Test Report: Hyper-V_Windows 17967

10ecd0aeb1ec35670d13066c60edb6e287060cba:2024-01-16:32725

Test fail (21/212)

TestOffline (284.3s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-748400 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p offline-docker-748400 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: exit status 90 (3m28.7702503s)

-- stdout --
	* [offline-docker-748400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node offline-docker-748400 in cluster offline-docker-748400
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Found network options:
	  - HTTP_PROXY=172.16.1.1:1
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	  - HTTP_PROXY=172.16.1.1:1
	
	

-- /stdout --
** stderr ** 
	W0116 03:30:13.083195   10356 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0116 03:30:13.178191   10356 out.go:296] Setting OutFile to fd 736 ...
	I0116 03:30:13.179193   10356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:30:13.179193   10356 out.go:309] Setting ErrFile to fd 840...
	I0116 03:30:13.179193   10356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:30:13.203186   10356 out.go:303] Setting JSON to false
	I0116 03:30:13.207180   10356 start.go:128] hostinfo: {"hostname":"minikube3","uptime":54204,"bootTime":1705321609,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0116 03:30:13.207180   10356 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0116 03:30:13.209183   10356 out.go:177] * [offline-docker-748400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0116 03:30:13.210180   10356 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 03:30:13.210180   10356 notify.go:220] Checking for updates...
	I0116 03:30:13.211179   10356 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:30:13.212181   10356 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0116 03:30:13.213197   10356 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:30:13.215194   10356 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:30:13.217200   10356 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:30:19.607643   10356 out.go:177] * Using the hyperv driver based on user configuration
	I0116 03:30:19.608420   10356 start.go:298] selected driver: hyperv
	I0116 03:30:19.608420   10356 start.go:902] validating driver "hyperv" against <nil>
	I0116 03:30:19.608703   10356 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:30:19.662108   10356 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 03:30:19.665223   10356 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 03:30:19.665223   10356 cni.go:84] Creating CNI manager for ""
	I0116 03:30:19.665223   10356 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0116 03:30:19.665223   10356 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 03:30:19.665223   10356 start_flags.go:321] config:
	{Name:offline-docker-748400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-748400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:30:19.666080   10356 iso.go:125] acquiring lock: {Name:mk2c0b62d272a573835231fdc54419c800e07e34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:30:19.667146   10356 out.go:177] * Starting control plane node offline-docker-748400 in cluster offline-docker-748400
	I0116 03:30:19.668081   10356 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0116 03:30:19.668081   10356 preload.go:148] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0116 03:30:19.668081   10356 cache.go:56] Caching tarball of preloaded images
	I0116 03:30:19.669068   10356 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0116 03:30:19.669068   10356 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0116 03:30:19.669068   10356 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\offline-docker-748400\config.json ...
	I0116 03:30:19.669068   10356 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\offline-docker-748400\config.json: {Name:mkc954fce93a43120c8305ea80afd83bd2d8c0bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:30:19.671081   10356 start.go:365] acquiring machines lock for offline-docker-748400: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:30:19.671081   10356 start.go:369] acquired machines lock for "offline-docker-748400" in 0s
	I0116 03:30:19.671081   10356 start.go:93] Provisioning new machine with config: &{Name:offline-docker-748400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-748400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0116 03:30:19.671081   10356 start.go:125] createHost starting for "" (driver="hyperv")
	I0116 03:30:19.672080   10356 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0116 03:30:19.672080   10356 start.go:159] libmachine.API.Create for "offline-docker-748400" (driver="hyperv")
	I0116 03:30:19.672080   10356 client.go:168] LocalClient.Create starting
	I0116 03:30:19.673076   10356 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0116 03:30:19.673076   10356 main.go:141] libmachine: Decoding PEM data...
	I0116 03:30:19.673076   10356 main.go:141] libmachine: Parsing certificate...
	I0116 03:30:19.673076   10356 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0116 03:30:19.674079   10356 main.go:141] libmachine: Decoding PEM data...
	I0116 03:30:19.674079   10356 main.go:141] libmachine: Parsing certificate...
	I0116 03:30:19.674079   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0116 03:30:21.935618   10356 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0116 03:30:21.935618   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:30:21.935618   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0116 03:30:24.286600   10356 main.go:141] libmachine: [stdout =====>] : False
	
	I0116 03:30:24.286600   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:30:24.286600   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0116 03:30:25.774323   10356 main.go:141] libmachine: [stdout =====>] : True
	
	I0116 03:30:25.774377   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:30:25.774377   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0116 03:30:29.759193   10356 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0116 03:30:29.759193   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:30:29.761865   10356 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 03:30:30.167497   10356 main.go:141] libmachine: Creating SSH key...
	I0116 03:30:30.292973   10356 main.go:141] libmachine: Creating VM...
	I0116 03:30:30.292973   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0116 03:30:33.124528   10356 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0116 03:30:33.124528   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:30:33.125061   10356 main.go:141] libmachine: Using switch "Default Switch"
	I0116 03:30:33.125061   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0116 03:30:34.965417   10356 main.go:141] libmachine: [stdout =====>] : True
	
	I0116 03:30:34.965417   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:30:34.965501   10356 main.go:141] libmachine: Creating VHD
	I0116 03:30:34.965501   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\offline-docker-748400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0116 03:30:38.804860   10356 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\offline-docker-748400\fixe
	                          d.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 8CCAD214-7584-40F7-86BA-E201B471850E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0116 03:30:38.804947   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:30:38.804947   10356 main.go:141] libmachine: Writing magic tar header
	I0116 03:30:38.805025   10356 main.go:141] libmachine: Writing SSH key tar header
	I0116 03:30:38.814371   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\offline-docker-748400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\offline-docker-748400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0116 03:30:41.997294   10356 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:30:41.997294   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:30:41.997450   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\offline-docker-748400\disk.vhd' -SizeBytes 20000MB
	I0116 03:30:44.454198   10356 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:30:44.454419   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:30:44.454473   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM offline-docker-748400 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\offline-docker-748400' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0116 03:30:47.929916   10356 main.go:141] libmachine: [stdout =====>] : 
	Name                  State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                  ----- ----------- ----------------- ------   ------             -------
	offline-docker-748400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0116 03:30:47.930089   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:30:47.930089   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName offline-docker-748400 -DynamicMemoryEnabled $false
	I0116 03:30:50.154252   10356 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:30:50.154252   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:30:50.154370   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor offline-docker-748400 -Count 2
	I0116 03:30:52.314398   10356 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:30:52.314398   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:30:52.314398   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName offline-docker-748400 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\offline-docker-748400\boot2docker.iso'
	I0116 03:30:54.871225   10356 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:30:54.871513   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:30:54.871597   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName offline-docker-748400 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\offline-docker-748400\disk.vhd'
	I0116 03:30:57.447174   10356 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:30:57.447174   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:30:57.447278   10356 main.go:141] libmachine: Starting VM...
	I0116 03:30:57.447278   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM offline-docker-748400
	I0116 03:31:00.299313   10356 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:31:00.299313   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:00.299313   10356 main.go:141] libmachine: Waiting for host to start...
	I0116 03:31:00.299313   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:31:02.636198   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:31:02.636198   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:02.636443   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:31:05.138902   10356 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:31:05.138902   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:06.139268   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:31:08.330601   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:31:08.330601   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:08.330601   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:31:10.849444   10356 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:31:10.849536   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:11.850712   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:31:14.029313   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:31:14.029313   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:14.029313   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:31:16.606672   10356 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:31:16.606672   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:17.610964   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:31:19.838599   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:31:19.838599   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:19.838664   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:31:22.316908   10356 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:31:22.317066   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:23.330076   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:31:25.525184   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:31:25.525281   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:25.525395   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:31:28.123866   10356 main.go:141] libmachine: [stdout =====>] : 172.27.115.126
	
	I0116 03:31:28.123943   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:28.123943   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:31:30.256768   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:31:30.256768   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:30.256959   10356 machine.go:88] provisioning docker machine ...
	I0116 03:31:30.256959   10356 buildroot.go:166] provisioning hostname "offline-docker-748400"
	I0116 03:31:30.256959   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:31:32.434608   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:31:32.434608   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:32.434691   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:31:34.958915   10356 main.go:141] libmachine: [stdout =====>] : 172.27.115.126
	
	I0116 03:31:34.958915   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:34.965615   10356 main.go:141] libmachine: Using SSH client type: native
	I0116 03:31:34.978524   10356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.115.126 22 <nil> <nil>}
	I0116 03:31:34.978524   10356 main.go:141] libmachine: About to run SSH command:
	sudo hostname offline-docker-748400 && echo "offline-docker-748400" | sudo tee /etc/hostname
	I0116 03:31:35.141446   10356 main.go:141] libmachine: SSH cmd err, output: <nil>: offline-docker-748400
	
	I0116 03:31:35.141550   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:31:37.264272   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:31:37.264272   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:37.264272   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:31:39.774207   10356 main.go:141] libmachine: [stdout =====>] : 172.27.115.126
	
	I0116 03:31:39.774207   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:39.780948   10356 main.go:141] libmachine: Using SSH client type: native
	I0116 03:31:39.781674   10356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.115.126 22 <nil> <nil>}
	I0116 03:31:39.781674   10356 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\soffline-docker-748400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 offline-docker-748400/g' /etc/hosts;
				else 
					echo '127.0.1.1 offline-docker-748400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:31:39.949877   10356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:31:39.949877   10356 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0116 03:31:39.949877   10356 buildroot.go:174] setting up certificates
	I0116 03:31:39.949877   10356 provision.go:83] configureAuth start
	I0116 03:31:39.949877   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:31:42.048946   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:31:42.048946   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:42.048946   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:31:44.525758   10356 main.go:141] libmachine: [stdout =====>] : 172.27.115.126
	
	I0116 03:31:44.525876   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:44.525876   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:31:46.611506   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:31:46.611506   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:46.611650   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:31:49.137882   10356 main.go:141] libmachine: [stdout =====>] : 172.27.115.126
	
	I0116 03:31:49.138051   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:49.138051   10356 provision.go:138] copyHostCerts
	I0116 03:31:49.138132   10356 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0116 03:31:49.138132   10356 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0116 03:31:49.139031   10356 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0116 03:31:49.140273   10356 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0116 03:31:49.140273   10356 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0116 03:31:49.140846   10356 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0116 03:31:49.142497   10356 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0116 03:31:49.142497   10356 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0116 03:31:49.142979   10356 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1675 bytes)
	I0116 03:31:49.144142   10356 provision.go:112] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.offline-docker-748400 san=[172.27.115.126 172.27.115.126 localhost 127.0.0.1 minikube offline-docker-748400]
	I0116 03:31:49.334572   10356 provision.go:172] copyRemoteCerts
	I0116 03:31:49.346734   10356 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:31:49.347742   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:31:51.470458   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:31:51.470638   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:51.470638   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:31:54.102110   10356 main.go:141] libmachine: [stdout =====>] : 172.27.115.126
	
	I0116 03:31:54.102110   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:54.102549   10356 sshutil.go:53] new ssh client: &{IP:172.27.115.126 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\offline-docker-748400\id_rsa Username:docker}
	I0116 03:31:54.212105   10356 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.865339s)
	I0116 03:31:54.212998   10356 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 03:31:54.253550   10356 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:31:54.289427   10356 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0116 03:31:54.327383   10356 provision.go:86] duration metric: configureAuth took 14.3774117s
	I0116 03:31:54.327523   10356 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:31:54.328258   10356 config.go:182] Loaded profile config "offline-docker-748400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:31:54.328348   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:31:56.448947   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:31:56.449122   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:56.449217   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:31:58.988643   10356 main.go:141] libmachine: [stdout =====>] : 172.27.115.126
	
	I0116 03:31:58.988643   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:31:58.996177   10356 main.go:141] libmachine: Using SSH client type: native
	I0116 03:31:58.996324   10356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.115.126 22 <nil> <nil>}
	I0116 03:31:58.996324   10356 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0116 03:31:59.151570   10356 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0116 03:31:59.151570   10356 buildroot.go:70] root file system type: tmpfs
	I0116 03:31:59.151830   10356 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0116 03:31:59.151933   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:32:01.277896   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:32:01.277954   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:01.277954   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:32:03.892508   10356 main.go:141] libmachine: [stdout =====>] : 172.27.115.126
	
	I0116 03:32:03.892701   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:03.898817   10356 main.go:141] libmachine: Using SSH client type: native
	I0116 03:32:03.899565   10356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.115.126 22 <nil> <nil>}
	I0116 03:32:03.899565   10356 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="HTTP_PROXY=172.16.1.1:1"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0116 03:32:04.061347   10356 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=HTTP_PROXY=172.16.1.1:1
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0116 03:32:04.061587   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:32:06.196857   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:32:06.196857   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:06.196857   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:32:08.754472   10356 main.go:141] libmachine: [stdout =====>] : 172.27.115.126
	
	I0116 03:32:08.754647   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:08.760933   10356 main.go:141] libmachine: Using SSH client type: native
	I0116 03:32:08.761096   10356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.115.126 22 <nil> <nil>}
	I0116 03:32:08.761096   10356 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0116 03:32:09.728871   10356 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0116 03:32:09.728871   10356 machine.go:91] provisioned docker machine in 39.4716514s
	I0116 03:32:09.728871   10356 client.go:171] LocalClient.Create took 1m50.0560654s
	I0116 03:32:09.728871   10356 start.go:167] duration metric: libmachine.API.Create for "offline-docker-748400" took 1m50.0560654s
	I0116 03:32:09.728871   10356 start.go:300] post-start starting for "offline-docker-748400" (driver="hyperv")
	I0116 03:32:09.728871   10356 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:32:09.752543   10356 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:32:09.752543   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:32:11.835940   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:32:11.835940   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:11.835940   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:32:14.313953   10356 main.go:141] libmachine: [stdout =====>] : 172.27.115.126
	
	I0116 03:32:14.313953   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:14.313953   10356 sshutil.go:53] new ssh client: &{IP:172.27.115.126 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\offline-docker-748400\id_rsa Username:docker}
	I0116 03:32:14.422478   10356 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6699037s)
	I0116 03:32:14.436630   10356 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:32:14.443233   10356 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:32:14.443233   10356 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0116 03:32:14.443817   10356 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0116 03:32:14.444874   10356 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> 135082.pem in /etc/ssl/certs
	I0116 03:32:14.459136   10356 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:32:14.475984   10356 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem --> /etc/ssl/certs/135082.pem (1708 bytes)
	I0116 03:32:14.516353   10356 start.go:303] post-start completed in 4.7874504s
	I0116 03:32:14.519972   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:32:16.606378   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:32:16.606378   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:16.606378   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:32:19.079805   10356 main.go:141] libmachine: [stdout =====>] : 172.27.115.126
	
	I0116 03:32:19.079805   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:19.079805   10356 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\offline-docker-748400\config.json ...
	I0116 03:32:19.083500   10356 start.go:128] duration metric: createHost completed in 1m59.4116312s
	I0116 03:32:19.083595   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:32:21.218488   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:32:21.218488   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:21.218588   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:32:23.783670   10356 main.go:141] libmachine: [stdout =====>] : 172.27.115.126
	
	I0116 03:32:23.783670   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:23.790139   10356 main.go:141] libmachine: Using SSH client type: native
	I0116 03:32:23.790848   10356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.115.126 22 <nil> <nil>}
	I0116 03:32:23.790848   10356 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 03:32:23.948137   10356 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705375943.948590998
	
	I0116 03:32:23.948137   10356 fix.go:206] guest clock: 1705375943.948590998
	I0116 03:32:23.948137   10356 fix.go:219] Guest: 2024-01-16 03:32:23.948590998 +0000 UTC Remote: 2024-01-16 03:32:19.0835004 +0000 UTC m=+126.117582901 (delta=4.865090598s)
	I0116 03:32:23.948137   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:32:26.072088   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:32:26.072292   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:26.072369   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:32:28.601553   10356 main.go:141] libmachine: [stdout =====>] : 172.27.115.126
	
	I0116 03:32:28.601553   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:28.608960   10356 main.go:141] libmachine: Using SSH client type: native
	I0116 03:32:28.609939   10356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.115.126 22 <nil> <nil>}
	I0116 03:32:28.609939   10356 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705375943
	I0116 03:32:28.759167   10356 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jan 16 03:32:23 UTC 2024
	
	I0116 03:32:28.759232   10356 fix.go:226] clock set: Tue Jan 16 03:32:23 UTC 2024
	 (err=<nil>)
	I0116 03:32:28.759232   10356 start.go:83] releasing machines lock for "offline-docker-748400", held for 2m9.0872993s
	I0116 03:32:28.759553   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:32:30.925068   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:32:30.925068   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:30.925068   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:32:33.569898   10356 main.go:141] libmachine: [stdout =====>] : 172.27.115.126
	
	I0116 03:32:33.569898   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:33.571106   10356 out.go:177] * Found network options:
	I0116 03:32:33.576002   10356 out.go:177]   - HTTP_PROXY=172.16.1.1:1
	W0116 03:32:33.576718   10356 out.go:239] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (172.27.115.126).
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (172.27.115.126).
	I0116 03:32:33.577567   10356 out.go:177] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I0116 03:32:33.577567   10356 out.go:177]   - HTTP_PROXY=172.16.1.1:1
	I0116 03:32:33.582880   10356 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:32:33.582880   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:32:33.595937   10356 ssh_runner.go:195] Run: cat /version.json
	I0116 03:32:33.595937   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM offline-docker-748400 ).state
	I0116 03:32:35.875310   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:32:35.875310   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:35.875310   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:32:35.893072   10356 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:32:35.893256   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:35.893365   10356 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM offline-docker-748400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:32:38.601376   10356 main.go:141] libmachine: [stdout =====>] : 172.27.115.126
	
	I0116 03:32:38.601376   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:38.601376   10356 sshutil.go:53] new ssh client: &{IP:172.27.115.126 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\offline-docker-748400\id_rsa Username:docker}
	I0116 03:32:38.620679   10356 main.go:141] libmachine: [stdout =====>] : 172.27.115.126
	
	I0116 03:32:38.620679   10356 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:32:38.620953   10356 sshutil.go:53] new ssh client: &{IP:172.27.115.126 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\offline-docker-748400\id_rsa Username:docker}
	I0116 03:32:38.829993   10356 ssh_runner.go:235] Completed: cat /version.json: (5.2340222s)
	I0116 03:32:38.829993   10356 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.2470788s)
	I0116 03:32:38.844341   10356 ssh_runner.go:195] Run: systemctl --version
	I0116 03:32:38.867903   10356 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:32:38.875982   10356 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:32:38.892338   10356 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:32:38.915604   10356 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:32:38.915604   10356 start.go:475] detecting cgroup driver to use...
	I0116 03:32:38.915604   10356 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:32:38.965559   10356 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0116 03:32:39.001369   10356 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0116 03:32:39.018975   10356 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0116 03:32:39.036008   10356 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0116 03:32:39.073967   10356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 03:32:39.122902   10356 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0116 03:32:39.154522   10356 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 03:32:39.187419   10356 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:32:39.222342   10356 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0116 03:32:39.256838   10356 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:32:39.283977   10356 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:32:39.317144   10356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:32:39.493575   10356 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0116 03:32:39.523727   10356 start.go:475] detecting cgroup driver to use...
	I0116 03:32:39.538300   10356 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0116 03:32:39.570962   10356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:32:39.601955   10356 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:32:39.650520   10356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:32:39.686599   10356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 03:32:39.723471   10356 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0116 03:32:39.777049   10356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 03:32:39.799239   10356 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:32:39.848735   10356 ssh_runner.go:195] Run: which cri-dockerd
	I0116 03:32:39.870788   10356 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0116 03:32:39.887104   10356 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0116 03:32:39.932027   10356 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0116 03:32:40.115981   10356 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0116 03:32:40.278623   10356 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0116 03:32:40.278623   10356 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0116 03:32:40.323256   10356 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:32:40.490347   10356 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0116 03:33:41.603316   10356 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1125083s)
	I0116 03:33:41.619152   10356 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0116 03:33:41.652082   10356 out.go:177] 
	W0116 03:33:41.652082   10356 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2024-01-16 03:31:19 UTC, ends at Tue 2024-01-16 03:33:41 UTC. --
	Jan 16 03:32:09 offline-docker-748400 systemd[1]: Starting Docker Application Container Engine...
	Jan 16 03:32:09 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:09.288653596Z" level=info msg="Starting up"
	Jan 16 03:32:09 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:09.289650210Z" level=info msg="containerd not running, starting managed containerd"
	Jan 16 03:32:09 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:09.290852728Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.323964718Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.348417281Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.348572083Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.350748315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.350841316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.351051920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.351138921Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.351376724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.351555227Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.351674829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.351810231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.352152736Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.352281638Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.352298138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.352436740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.352555742Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.352621643Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.352710744Z" level=info msg="metadata content store policy set" policy=shared
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.362884995Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363004197Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363025397Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363080198Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363102498Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363169299Z" level=info msg="NRI interface is disabled by configuration."
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363190199Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363346302Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363449903Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363470803Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363486104Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363501004Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363517904Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363531504Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363544605Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363558305Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363572505Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363585605Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363598105Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.363756308Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365013826Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365175529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365206529Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365283830Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365377132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365508734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365532634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365549534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365567634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365585335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365602635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365618835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365637536Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365755237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365852139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365874939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365896839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365915540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365935040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365952340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365968140Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.365987941Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.366004341Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.366060142Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.367189058Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.367371161Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.367502763Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.367533864Z" level=info msg="containerd successfully booted in 0.045047s"
	Jan 16 03:32:09 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:09.403502396Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 16 03:32:09 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:09.419097327Z" level=info msg="Loading containers: start."
	Jan 16 03:32:09 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:09.645884185Z" level=info msg="Loading containers: done."
	Jan 16 03:32:09 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:09.666962697Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 16 03:32:09 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:09.667313402Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 16 03:32:09 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:09.667443004Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 16 03:32:09 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:09.667573406Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 16 03:32:09 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:09.667698508Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 16 03:32:09 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:09.667940212Z" level=info msg="Daemon has completed initialization"
	Jan 16 03:32:09 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:09.725809569Z" level=info msg="API listen on [::]:2376"
	Jan 16 03:32:09 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:09.725951571Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 16 03:32:09 offline-docker-748400 systemd[1]: Started Docker Application Container Engine.
	Jan 16 03:32:40 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:40.511776212Z" level=info msg="Processing signal 'terminated'"
	Jan 16 03:32:40 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:40.513513712Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 16 03:32:40 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:40.513545712Z" level=info msg="Daemon shutdown complete"
	Jan 16 03:32:40 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:40.513710712Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 16 03:32:40 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:40.513956512Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 16 03:32:40 offline-docker-748400 systemd[1]: Stopping Docker Application Container Engine...
	Jan 16 03:32:41 offline-docker-748400 systemd[1]: docker.service: Succeeded.
	Jan 16 03:32:41 offline-docker-748400 systemd[1]: Stopped Docker Application Container Engine.
	Jan 16 03:32:41 offline-docker-748400 systemd[1]: Starting Docker Application Container Engine...
	Jan 16 03:32:41 offline-docker-748400 dockerd[1006]: time="2024-01-16T03:32:41.590477312Z" level=info msg="Starting up"
	Jan 16 03:33:41 offline-docker-748400 dockerd[1006]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 16 03:33:41 offline-docker-748400 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 16 03:33:41 offline-docker-748400 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 16 03:33:41 offline-docker-748400 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2024-01-16 03:31:19 UTC, ends at Tue 2024-01-16 03:33:41 UTC. --
	Jan 16 03:32:09 offline-docker-748400 systemd[1]: Starting Docker Application Container Engine...
	Jan 16 03:32:09 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:09.288653596Z" level=info msg="Starting up"
	Jan 16 03:32:09 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:09.289650210Z" level=info msg="containerd not running, starting managed containerd"
	Jan 16 03:32:09 offline-docker-748400 dockerd[673]: time="2024-01-16T03:32:09.290852728Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=679
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.323964718Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.348417281Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.348572083Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.350748315Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.350841316Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.351051920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.351138921Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 16 03:32:09 offline-docker-748400 dockerd[679]: time="2024-01-16T03:32:09.351376724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	
	-- /stdout --
	W0116 03:33:41.653532   10356 out.go:239] * 
	W0116 03:33:41.655162   10356 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 03:33:41.656128   10356 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-windows-amd64.exe start -p offline-docker-748400 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv failed: exit status 90
panic.go:523: *** TestOffline FAILED at 2024-01-16 03:33:41.9162854 +0000 UTC m=+7035.855646601
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-748400 -n offline-docker-748400
E0116 03:33:46.620552   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p offline-docker-748400 -n offline-docker-748400: exit status 6 (12.3589214s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	W0116 03:33:42.058038    8460 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0116 03:33:54.199396    8460 status.go:415] kubeconfig endpoint: extract IP: "offline-docker-748400" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "offline-docker-748400" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "offline-docker-748400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-748400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-748400: (1m2.9673339s)
--- FAIL: TestOffline (284.30s)

TestAddons/parallel/Registry (71.22s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 21.4752ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-cbxwr" [dcd08552-2136-4819-9883-15f69c75e8f2] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.021673s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9l528" [fdd20706-293a-4cec-a5fa-d4bc0401e456] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.017748s
addons_test.go:340: (dbg) Run:  kubectl --context addons-179200 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-179200 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-179200 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.8090129s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-179200 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-179200 ip: (2.835782s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0116 01:44:01.761835    4344 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-179200 ip"
2024/01/16 01:44:04 [DEBUG] GET http://172.27.117.123:5000
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-179200 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-179200 addons disable registry --alsologtostderr -v=1: (15.6253678s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-179200 -n addons-179200
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-179200 -n addons-179200: (13.2188524s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-179200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-179200 logs -n 25: (10.0345811s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-131100 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:36 UTC |                     |
	|         | -p download-only-131100                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:36 UTC | 16 Jan 24 01:36 UTC |
	| delete  | -p download-only-131100                                                                     | download-only-131100 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:36 UTC | 16 Jan 24 01:36 UTC |
	| start   | -o=json --download-only                                                                     | download-only-399400 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:36 UTC |                     |
	|         | -p download-only-399400                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:37 UTC | 16 Jan 24 01:37 UTC |
	| delete  | -p download-only-399400                                                                     | download-only-399400 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:37 UTC | 16 Jan 24 01:37 UTC |
	| start   | -o=json --download-only                                                                     | download-only-690000 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:37 UTC |                     |
	|         | -p download-only-690000                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:37 UTC | 16 Jan 24 01:37 UTC |
	| delete  | -p download-only-690000                                                                     | download-only-690000 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:37 UTC | 16 Jan 24 01:37 UTC |
	| delete  | -p download-only-131100                                                                     | download-only-131100 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:37 UTC | 16 Jan 24 01:37 UTC |
	| delete  | -p download-only-399400                                                                     | download-only-399400 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:37 UTC | 16 Jan 24 01:37 UTC |
	| delete  | -p download-only-690000                                                                     | download-only-690000 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:37 UTC | 16 Jan 24 01:37 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-226800 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:37 UTC |                     |
	|         | binary-mirror-226800                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:52661                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-226800                                                                     | binary-mirror-226800 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:37 UTC | 16 Jan 24 01:37 UTC |
	| addons  | disable dashboard -p                                                                        | addons-179200        | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:37 UTC |                     |
	|         | addons-179200                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-179200        | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:37 UTC |                     |
	|         | addons-179200                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-179200 --wait=true                                                                | addons-179200        | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:37 UTC | 16 Jan 24 01:43 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-179200 addons                                                                        | addons-179200        | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:43 UTC | 16 Jan 24 01:44 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ssh     | addons-179200 ssh cat                                                                       | addons-179200        | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:43 UTC | 16 Jan 24 01:44 UTC |
	|         | /opt/local-path-provisioner/pvc-2518c260-1d7d-459f-be84-8322ff56bda9_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-179200 ip                                                                            | addons-179200        | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:44 UTC | 16 Jan 24 01:44 UTC |
	| addons  | addons-179200 addons disable                                                                | addons-179200        | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:44 UTC | 16 Jan 24 01:44 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-179200 addons disable                                                                | addons-179200        | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:44 UTC | 16 Jan 24 01:44 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-179200        | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:44 UTC |                     |
	|         | -p addons-179200                                                                            |                      |                   |         |                     |                     |
	| addons  | addons-179200 addons disable                                                                | addons-179200        | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:44 UTC |                     |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 01:37:30
	Running on machine: minikube3
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 01:37:30.911374   10524 out.go:296] Setting OutFile to fd 976 ...
	I0116 01:37:30.911374   10524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 01:37:30.912429   10524 out.go:309] Setting ErrFile to fd 980...
	I0116 01:37:30.912483   10524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 01:37:30.935178   10524 out.go:303] Setting JSON to false
	I0116 01:37:30.938390   10524 start.go:128] hostinfo: {"hostname":"minikube3","uptime":47441,"bootTime":1705321609,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0116 01:37:30.938390   10524 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0116 01:37:30.939799   10524 out.go:177] * [addons-179200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0116 01:37:30.940589   10524 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 01:37:30.940589   10524 notify.go:220] Checking for updates...
	I0116 01:37:30.941503   10524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 01:37:30.941503   10524 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0116 01:37:30.942565   10524 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 01:37:30.943442   10524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 01:37:30.943881   10524 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 01:37:36.420642   10524 out.go:177] * Using the hyperv driver based on user configuration
	I0116 01:37:36.420795   10524 start.go:298] selected driver: hyperv
	I0116 01:37:36.420795   10524 start.go:902] validating driver "hyperv" against <nil>
	I0116 01:37:36.421418   10524 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 01:37:36.471995   10524 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 01:37:36.473441   10524 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 01:37:36.473580   10524 cni.go:84] Creating CNI manager for ""
	I0116 01:37:36.473580   10524 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0116 01:37:36.473640   10524 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 01:37:36.473640   10524 start_flags.go:321] config:
	{Name:addons-179200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-179200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 01:37:36.473640   10524 iso.go:125] acquiring lock: {Name:mk2c0b62d272a573835231fdc54419c800e07e34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 01:37:36.475542   10524 out.go:177] * Starting control plane node addons-179200 in cluster addons-179200
	I0116 01:37:36.476327   10524 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0116 01:37:36.476542   10524 preload.go:148] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0116 01:37:36.476542   10524 cache.go:56] Caching tarball of preloaded images
	I0116 01:37:36.476542   10524 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0116 01:37:36.477188   10524 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0116 01:37:36.477785   10524 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\config.json ...
	I0116 01:37:36.477923   10524 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\config.json: {Name:mka8f15f0b8175cd7f4803a929ed22980b13ed77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:37:36.479293   10524 start.go:365] acquiring machines lock for addons-179200: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 01:37:36.479293   10524 start.go:369] acquired machines lock for "addons-179200" in 0s
	I0116 01:37:36.479293   10524 start.go:93] Provisioning new machine with config: &{Name:addons-179200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:addons-179200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0116 01:37:36.479893   10524 start.go:125] createHost starting for "" (driver="hyperv")
	I0116 01:37:36.480608   10524 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0116 01:37:36.480608   10524 start.go:159] libmachine.API.Create for "addons-179200" (driver="hyperv")
	I0116 01:37:36.480608   10524 client.go:168] LocalClient.Create starting
	I0116 01:37:36.481467   10524 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0116 01:37:36.851472   10524 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0116 01:37:37.194491   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0116 01:37:39.325742   10524 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0116 01:37:39.325742   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:37:39.325742   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0116 01:37:41.048257   10524 main.go:141] libmachine: [stdout =====>] : False
	
	I0116 01:37:41.048389   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:37:41.048555   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0116 01:37:42.516211   10524 main.go:141] libmachine: [stdout =====>] : True
	
	I0116 01:37:42.516211   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:37:42.516211   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0116 01:37:46.235485   10524 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0116 01:37:46.235565   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:37:46.238505   10524 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 01:37:46.654283   10524 main.go:141] libmachine: Creating SSH key...
	I0116 01:37:46.959450   10524 main.go:141] libmachine: Creating VM...
	I0116 01:37:46.959450   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0116 01:37:49.746437   10524 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0116 01:37:49.746644   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:37:49.746795   10524 main.go:141] libmachine: Using switch "Default Switch"
	I0116 01:37:49.746795   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0116 01:37:51.454750   10524 main.go:141] libmachine: [stdout =====>] : True
	
	I0116 01:37:51.454818   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:37:51.454818   10524 main.go:141] libmachine: Creating VHD
	I0116 01:37:51.454910   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\fixed.vhd' -SizeBytes 10MB -Fixed
	I0116 01:37:55.174657   10524 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 5426B7F4-B154-4F65-B3BA-F80FFAD45A4D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0116 01:37:55.174954   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:37:55.174954   10524 main.go:141] libmachine: Writing magic tar header
	I0116 01:37:55.175121   10524 main.go:141] libmachine: Writing SSH key tar header
	I0116 01:37:55.184030   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\disk.vhd' -VHDType Dynamic -DeleteSource
	I0116 01:37:58.311718   10524 main.go:141] libmachine: [stdout =====>] : 
	I0116 01:37:58.311718   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:37:58.311718   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\disk.vhd' -SizeBytes 20000MB
	I0116 01:38:00.821557   10524 main.go:141] libmachine: [stdout =====>] : 
	I0116 01:38:00.821557   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:00.821557   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-179200 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0116 01:38:04.953451   10524 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-179200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0116 01:38:04.953681   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:04.953788   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-179200 -DynamicMemoryEnabled $false
	I0116 01:38:07.200425   10524 main.go:141] libmachine: [stdout =====>] : 
	I0116 01:38:07.200425   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:07.200509   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-179200 -Count 2
	I0116 01:38:09.366441   10524 main.go:141] libmachine: [stdout =====>] : 
	I0116 01:38:09.366641   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:09.366717   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-179200 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\boot2docker.iso'
	I0116 01:38:11.902845   10524 main.go:141] libmachine: [stdout =====>] : 
	I0116 01:38:11.902845   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:11.902845   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-179200 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\disk.vhd'
	I0116 01:38:14.498618   10524 main.go:141] libmachine: [stdout =====>] : 
	I0116 01:38:14.498896   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:14.499041   10524 main.go:141] libmachine: Starting VM...
	I0116 01:38:14.499041   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-179200
	I0116 01:38:17.351787   10524 main.go:141] libmachine: [stdout =====>] : 
	I0116 01:38:17.351861   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:17.351861   10524 main.go:141] libmachine: Waiting for host to start...
	I0116 01:38:17.351861   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:38:19.608204   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:38:19.608204   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:19.608204   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:38:22.037701   10524 main.go:141] libmachine: [stdout =====>] : 
	I0116 01:38:22.037779   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:23.039545   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:38:25.222767   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:38:25.222935   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:25.223027   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:38:27.709534   10524 main.go:141] libmachine: [stdout =====>] : 
	I0116 01:38:27.709534   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:28.713026   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:38:30.905101   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:38:30.905316   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:30.905316   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:38:33.446842   10524 main.go:141] libmachine: [stdout =====>] : 
	I0116 01:38:33.446842   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:34.450457   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:38:36.635921   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:38:36.636169   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:36.636269   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:38:39.108966   10524 main.go:141] libmachine: [stdout =====>] : 
	I0116 01:38:39.108966   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:40.115829   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:38:42.304125   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:38:42.304125   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:42.304125   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:38:44.848162   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:38:44.848162   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:44.848262   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:38:46.955044   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:38:46.955267   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:46.955267   10524 machine.go:88] provisioning docker machine ...
	I0116 01:38:46.955394   10524 buildroot.go:166] provisioning hostname "addons-179200"
	I0116 01:38:46.955490   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:38:49.187375   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:38:49.187375   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:49.187375   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:38:51.687589   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:38:51.687589   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:51.694080   10524 main.go:141] libmachine: Using SSH client type: native
	I0116 01:38:51.704134   10524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x826120] 0x828c60 <nil>  [] 0s} 172.27.117.123 22 <nil> <nil>}
	I0116 01:38:51.704134   10524 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-179200 && echo "addons-179200" | sudo tee /etc/hostname
	I0116 01:38:51.894188   10524 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-179200
	
	I0116 01:38:51.894188   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:38:53.972493   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:38:53.972678   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:53.972678   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:38:56.481426   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:38:56.481621   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:56.487765   10524 main.go:141] libmachine: Using SSH client type: native
	I0116 01:38:56.488305   10524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x826120] 0x828c60 <nil>  [] 0s} 172.27.117.123 22 <nil> <nil>}
	I0116 01:38:56.488305   10524 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-179200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-179200/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-179200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 01:38:56.658469   10524 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 01:38:56.658469   10524 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0116 01:38:56.658469   10524 buildroot.go:174] setting up certificates
	I0116 01:38:56.658469   10524 provision.go:83] configureAuth start
	I0116 01:38:56.658469   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:38:58.744684   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:38:58.744684   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:38:58.744684   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:39:01.360507   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:39:01.360507   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:01.360576   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:39:03.522160   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:39:03.522338   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:03.522338   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:39:06.039420   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:39:06.039420   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:06.039518   10524 provision.go:138] copyHostCerts
	I0116 01:39:06.040377   10524 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1675 bytes)
	I0116 01:39:06.042096   10524 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0116 01:39:06.042956   10524 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0116 01:39:06.044737   10524 provision.go:112] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-179200 san=[172.27.117.123 172.27.117.123 localhost 127.0.0.1 minikube addons-179200]
	I0116 01:39:06.239199   10524 provision.go:172] copyRemoteCerts
	I0116 01:39:06.253294   10524 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 01:39:06.253294   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:39:08.350295   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:39:08.350461   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:08.350541   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:39:10.867182   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:39:10.867182   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:10.867182   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:39:11.001218   10524 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7478935s)
	I0116 01:39:11.001899   10524 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 01:39:11.038653   10524 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0116 01:39:11.079703   10524 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 01:39:11.117232   10524 provision.go:86] duration metric: configureAuth took 14.4586221s
	I0116 01:39:11.117326   10524 buildroot.go:189] setting minikube options for container-runtime
	I0116 01:39:11.118125   10524 config.go:182] Loaded profile config "addons-179200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 01:39:11.118270   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:39:13.180847   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:39:13.181077   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:13.181077   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:39:15.658382   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:39:15.658651   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:15.664192   10524 main.go:141] libmachine: Using SSH client type: native
	I0116 01:39:15.665088   10524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x826120] 0x828c60 <nil>  [] 0s} 172.27.117.123 22 <nil> <nil>}
	I0116 01:39:15.665088   10524 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0116 01:39:15.821200   10524 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0116 01:39:15.821292   10524 buildroot.go:70] root file system type: tmpfs
	I0116 01:39:15.821418   10524 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0116 01:39:15.821418   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:39:17.920358   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:39:17.920358   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:17.920462   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:39:20.385876   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:39:20.385876   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:20.392464   10524 main.go:141] libmachine: Using SSH client type: native
	I0116 01:39:20.393233   10524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x826120] 0x828c60 <nil>  [] 0s} 172.27.117.123 22 <nil> <nil>}
	I0116 01:39:20.393402   10524 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0116 01:39:20.568459   10524 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0116 01:39:20.569010   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:39:22.683154   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:39:22.683154   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:22.683154   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:39:25.132080   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:39:25.132080   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:25.137760   10524 main.go:141] libmachine: Using SSH client type: native
	I0116 01:39:25.138474   10524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x826120] 0x828c60 <nil>  [] 0s} 172.27.117.123 22 <nil> <nil>}
	I0116 01:39:25.139004   10524 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0116 01:39:26.102971   10524 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0116 01:39:26.102971   10524 machine.go:91] provisioned docker machine in 39.14745s
	I0116 01:39:26.102971   10524 client.go:171] LocalClient.Create took 1m49.6216509s
	I0116 01:39:26.102971   10524 start.go:167] duration metric: libmachine.API.Create for "addons-179200" took 1m49.6216509s
	I0116 01:39:26.102971   10524 start.go:300] post-start starting for "addons-179200" (driver="hyperv")
	I0116 01:39:26.102971   10524 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 01:39:26.115957   10524 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 01:39:26.116952   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:39:28.218670   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:39:28.218860   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:28.218969   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:39:30.686627   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:39:30.686705   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:30.686777   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:39:30.809204   10524 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6921352s)
	I0116 01:39:30.823935   10524 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 01:39:30.830268   10524 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 01:39:30.830430   10524 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0116 01:39:30.831029   10524 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0116 01:39:30.831029   10524 start.go:303] post-start completed in 4.7280266s
	I0116 01:39:30.833569   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:39:32.918586   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:39:32.918586   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:32.918586   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:39:35.454533   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:39:35.454533   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:35.454533   10524 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\config.json ...
	I0116 01:39:35.458360   10524 start.go:128] duration metric: createHost completed in 1m58.9776932s
	I0116 01:39:35.458360   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:39:37.573005   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:39:37.573005   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:37.573112   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:39:40.056604   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:39:40.056604   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:40.063694   10524 main.go:141] libmachine: Using SSH client type: native
	I0116 01:39:40.064570   10524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x826120] 0x828c60 <nil>  [] 0s} 172.27.117.123 22 <nil> <nil>}
	I0116 01:39:40.064570   10524 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 01:39:40.219358   10524 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705369180.220200595
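	Editor's note on the odd `date +%!s(MISSING).%!N(MISSING)` rendering two lines up: the remote command is presumably `date +%s.%N`, and the `(MISSING)` markers appear when a string containing `%` verbs is later run through Go's fmt package with no arguments. A minimal reproduction:

	```go
	package main

	import "fmt"

	func main() {
		// A string containing shell format characters (%s, %N) passed through
		// Sprintf with no arguments makes Go render each unsatisfied verb as
		// %!verb(MISSING) -- the same artifact seen in the log line above.
		fmt.Println(fmt.Sprintf("date +%s.%N"))
	}
	```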
	
	I0116 01:39:40.219358   10524 fix.go:206] guest clock: 1705369180.220200595
	I0116 01:39:40.219512   10524 fix.go:219] Guest: 2024-01-16 01:39:40.220200595 +0000 UTC Remote: 2024-01-16 01:39:35.4583605 +0000 UTC m=+124.729247701 (delta=4.761840095s)
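	Editor's note: the `delta` in the fix.go:219 line above is just the guest clock minus the host-side wall clock for the same instant. A re-computation from the two timestamps printed in that log line (the layout string matches Go's default `time.Time` formatting):

	```go
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Timestamps copied verbatim from the fix.go:219 log line.
		const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
		guest, err := time.Parse(layout, "2024-01-16 01:39:40.220200595 +0000 UTC")
		if err != nil {
			panic(err)
		}
		remote, err := time.Parse(layout, "2024-01-16 01:39:35.4583605 +0000 UTC")
		if err != nil {
			panic(err)
		}
		fmt.Println(guest.Sub(remote)) // matches the logged delta
	}
	```

	This delta is what triggers the `sudo date -s @1705369180` clock fix a few lines further down.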
	I0116 01:39:40.219512   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:39:42.299696   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:39:42.299696   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:42.299790   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:39:44.799646   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:39:44.799814   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:44.806558   10524 main.go:141] libmachine: Using SSH client type: native
	I0116 01:39:44.806652   10524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x826120] 0x828c60 <nil>  [] 0s} 172.27.117.123 22 <nil> <nil>}
	I0116 01:39:44.806652   10524 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705369180
	I0116 01:39:44.970235   10524 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jan 16 01:39:40 UTC 2024
	
	I0116 01:39:44.970235   10524 fix.go:226] clock set: Tue Jan 16 01:39:40 UTC 2024
	 (err=<nil>)
	I0116 01:39:44.970235   10524 start.go:83] releasing machines lock for "addons-179200", held for 2m8.4901071s
	I0116 01:39:44.970920   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:39:47.106392   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:39:47.106745   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:47.106745   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:39:49.586919   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:39:49.586919   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:49.591549   10524 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 01:39:49.591774   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:39:49.604739   10524 ssh_runner.go:195] Run: cat /version.json
	I0116 01:39:49.604739   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:39:51.719487   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:39:51.719562   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:51.719562   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:39:51.737846   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:39:51.737846   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:51.737846   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:39:54.349261   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:39:54.350145   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:54.350331   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:39:54.366045   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:39:54.366222   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:39:54.366434   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:39:54.537859   10524 ssh_runner.go:235] Completed: cat /version.json: (4.9329638s)
	I0116 01:39:54.537859   10524 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9461874s)
	I0116 01:39:54.552547   10524 ssh_runner.go:195] Run: systemctl --version
	I0116 01:39:54.575071   10524 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 01:39:54.583038   10524 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 01:39:54.596786   10524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 01:39:54.622060   10524 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 01:39:54.622060   10524 start.go:475] detecting cgroup driver to use...
	I0116 01:39:54.622630   10524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 01:39:54.665455   10524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0116 01:39:54.694680   10524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0116 01:39:54.709859   10524 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0116 01:39:54.725038   10524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0116 01:39:54.754290   10524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 01:39:54.784998   10524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0116 01:39:54.813159   10524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 01:39:54.844098   10524 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 01:39:54.875733   10524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
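	Editor's note: the run of sed invocations above rewrites /etc/containerd/config.toml in place to force the cgroupfs driver and the runc v2 shim. As an illustrative sketch (not minikube's actual code path), the `SystemdCgroup` substitution behaves like:

	```go
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Same pattern as the sed expression in the log:
		//   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
		// (?m) makes ^ and $ match per line; ${1} preserves indentation.
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true`
		fmt.Println(re.ReplaceAllString(in, "${1}SystemdCgroup = false"))
	}
	```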
	I0116 01:39:54.904687   10524 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 01:39:54.933387   10524 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 01:39:54.960581   10524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 01:39:55.124644   10524 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0116 01:39:55.150453   10524 start.go:475] detecting cgroup driver to use...
	I0116 01:39:55.166217   10524 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0116 01:39:55.199197   10524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 01:39:55.229190   10524 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 01:39:55.277880   10524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 01:39:55.316625   10524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 01:39:55.350611   10524 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0116 01:39:55.400684   10524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 01:39:55.420141   10524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 01:39:55.462288   10524 ssh_runner.go:195] Run: which cri-dockerd
	I0116 01:39:55.481505   10524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0116 01:39:55.496209   10524 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0116 01:39:55.535046   10524 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0116 01:39:55.703006   10524 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0116 01:39:55.862185   10524 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0116 01:39:55.862185   10524 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0116 01:39:55.917570   10524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 01:39:56.092715   10524 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0116 01:39:57.569405   10524 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.476291s)
	I0116 01:39:57.583632   10524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0116 01:39:57.618532   10524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0116 01:39:57.652718   10524 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0116 01:39:57.818946   10524 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0116 01:39:57.983329   10524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 01:39:58.158656   10524 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0116 01:39:58.195717   10524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0116 01:39:58.225760   10524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 01:39:58.391120   10524 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0116 01:39:58.487733   10524 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0116 01:39:58.502730   10524 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0116 01:39:58.509997   10524 start.go:543] Will wait 60s for crictl version
	I0116 01:39:58.524786   10524 ssh_runner.go:195] Run: which crictl
	I0116 01:39:58.543554   10524 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 01:39:58.607088   10524 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0116 01:39:58.621086   10524 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0116 01:39:58.672073   10524 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0116 01:39:58.707790   10524 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0116 01:39:58.707790   10524 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0116 01:39:58.711907   10524 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0116 01:39:58.712453   10524 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0116 01:39:58.712453   10524 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0116 01:39:58.712453   10524 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:4e:7e Flags:up|broadcast|multicast|running}
	I0116 01:39:58.715316   10524 ip.go:210] interface addr: fe80::d699:fcba:3e2b:1549/64
	I0116 01:39:58.715316   10524 ip.go:210] interface addr: 172.27.112.1/20
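	Editor's note: ip.go above walks the host's network interfaces looking for one whose name begins with "vEthernet (Default Switch)". The matching logic amounts to a string-prefix test; a sketch over the interface names that appear in the log lines above (not the actual minikube source):

	```go
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Interface names taken from the ip.go log lines above.
		prefix := "vEthernet (Default Switch)"
		for _, name := range []string{
			"Ethernet 2",
			"Loopback Pseudo-Interface 1",
			"vEthernet (Default Switch)",
		} {
			if strings.HasPrefix(name, prefix) {
				fmt.Printf("found prefix matching interface: %q\n", name)
			} else {
				fmt.Printf("%q does not match prefix %q\n", name, prefix)
			}
		}
	}
	```

	Only the matching interface's addresses (here 172.27.112.1/20) are then used to populate host.minikube.internal in the guest's /etc/hosts.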
	I0116 01:39:58.730215   10524 ssh_runner.go:195] Run: grep 172.27.112.1	host.minikube.internal$ /etc/hosts
	I0116 01:39:58.736212   10524 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 01:39:58.755987   10524 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0116 01:39:58.766480   10524 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0116 01:39:58.791228   10524 docker.go:685] Got preloaded images: 
	I0116 01:39:58.791228   10524 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0116 01:39:58.805810   10524 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0116 01:39:58.834683   10524 ssh_runner.go:195] Run: which lz4
	I0116 01:39:58.854291   10524 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0116 01:39:58.860570   10524 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 01:39:58.860570   10524 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0116 01:40:00.791316   10524 docker.go:649] Took 1.951195 seconds to copy over tarball
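	Editor's note: for scale, the figures in the two log lines above (423165415 bytes copied over SSH in 1.951195s, not counting the later tar extraction) work out to roughly:

	```go
	package main

	import "fmt"

	func main() {
		// Byte count and duration copied from the scp / docker.go log lines.
		const bytes = 423165415.0
		const seconds = 1.951195
		fmt.Printf("%.1f MB/s\n", bytes/seconds/1e6)
	}
	```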
	I0116 01:40:00.805306   10524 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 01:40:07.079683   10524 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (6.2743357s)
	I0116 01:40:07.079683   10524 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0116 01:40:07.152369   10524 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0116 01:40:07.169101   10524 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0116 01:40:07.212577   10524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 01:40:07.392420   10524 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0116 01:40:13.152291   10524 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.7598336s)
	I0116 01:40:13.162491   10524 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0116 01:40:13.189800   10524 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0116 01:40:13.189800   10524 cache_images.go:84] Images are preloaded, skipping loading
	I0116 01:40:13.199796   10524 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0116 01:40:13.235052   10524 cni.go:84] Creating CNI manager for ""
	I0116 01:40:13.236045   10524 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0116 01:40:13.236045   10524 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 01:40:13.236045   10524 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.117.123 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-179200 NodeName:addons-179200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.117.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.117.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 01:40:13.236045   10524 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.117.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-179200"
	  kubeletExtraArgs:
	    node-ip: 172.27.117.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.117.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 01:40:13.236045   10524 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-179200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.117.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-179200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 01:40:13.249039   10524 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 01:40:13.263830   10524 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 01:40:13.278521   10524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 01:40:13.292772   10524 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0116 01:40:13.320281   10524 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 01:40:13.349160   10524 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0116 01:40:13.389558   10524 ssh_runner.go:195] Run: grep 172.27.117.123	control-plane.minikube.internal$ /etc/hosts
	I0116 01:40:13.395508   10524 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.117.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 01:40:13.412922   10524 certs.go:56] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200 for IP: 172.27.117.123
	I0116 01:40:13.413032   10524 certs.go:190] acquiring lock for shared ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:40:13.413369   10524 certs.go:204] generating minikubeCA CA: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0116 01:40:13.581053   10524 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt ...
	I0116 01:40:13.581053   10524 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt: {Name:mk1d1f25727e6fcaf35d7d74de783ad2d2c6be81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:40:13.583105   10524 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key ...
	I0116 01:40:13.583105   10524 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key: {Name:mkffeaed7182692572a4aaea1f77b60f45c78854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:40:13.584104   10524 certs.go:204] generating proxyClientCA CA: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0116 01:40:13.680120   10524 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0116 01:40:13.680120   10524 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkc09bedb222360a1dcc92648b423932b0197d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:40:13.681771   10524 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key ...
	I0116 01:40:13.681771   10524 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk23d29d7cc073007c63c291d9cf6fa322998d26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:40:13.683423   10524 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.key
	I0116 01:40:13.683609   10524 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt with IP's: []
	I0116 01:40:13.823830   10524 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt ...
	I0116 01:40:13.823830   10524 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: {Name:mk75c9564769329795aa493fed063a6fd300b802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:40:13.825923   10524 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.key ...
	I0116 01:40:13.825923   10524 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.key: {Name:mk6872b2ef8e3a1c268a23f4508a3d1add09f384 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:40:13.826930   10524 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\apiserver.key.bca52dad
	I0116 01:40:13.827432   10524 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\apiserver.crt.bca52dad with IP's: [172.27.117.123 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 01:40:14.067793   10524 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\apiserver.crt.bca52dad ...
	I0116 01:40:14.067793   10524 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\apiserver.crt.bca52dad: {Name:mkf65cc55f6e8488bd88071cc2663f80fec6b845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:40:14.070099   10524 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\apiserver.key.bca52dad ...
	I0116 01:40:14.070099   10524 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\apiserver.key.bca52dad: {Name:mk87f01f869c05ac66c41d8fd17e627c6cda2bc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:40:14.070580   10524 certs.go:337] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\apiserver.crt.bca52dad -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\apiserver.crt
	I0116 01:40:14.082762   10524 certs.go:341] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\apiserver.key.bca52dad -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\apiserver.key
	I0116 01:40:14.083505   10524 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\proxy-client.key
	I0116 01:40:14.084511   10524 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\proxy-client.crt with IP's: []
	I0116 01:40:14.175935   10524 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\proxy-client.crt ...
	I0116 01:40:14.175935   10524 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\proxy-client.crt: {Name:mkb6b41b3c0e09a50c0163d0302c9207183d537f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:40:14.176933   10524 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\proxy-client.key ...
	I0116 01:40:14.176933   10524 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\proxy-client.key: {Name:mk68466d678e6a12d26c6c422c7440ec6799d7e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
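	Editor's note: the crypto.go lines above generate the CA, client, apiserver, and aggregator keypairs; the apiserver cert carries the IP SANs listed at 01:40:13.827 (node IP, kubernetes service VIP, loopbacks). A self-signed stand-in with the same SANs, as a sketch only (minikube's real certs are CA-signed and use a different key type):

	```go
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			// IP SANs copied from the crypto.go:68 log line above.
			IPAddresses: []net.IP{
				net.ParseIP("172.27.117.123"), // node IP
				net.ParseIP("10.96.0.1"),      // kubernetes service VIP
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		cert, err := x509.ParseCertificate(der)
		if err != nil {
			panic(err)
		}
		fmt.Println(len(cert.IPAddresses))
	}
	```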
	I0116 01:40:14.190189   10524 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0116 01:40:14.190801   10524 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0116 01:40:14.190960   10524 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0116 01:40:14.191204   10524 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0116 01:40:14.192504   10524 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 01:40:14.231220   10524 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 01:40:14.269321   10524 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 01:40:14.308172   10524 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0116 01:40:14.355145   10524 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 01:40:14.400153   10524 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 01:40:14.436679   10524 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 01:40:14.474754   10524 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 01:40:14.515906   10524 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 01:40:14.553220   10524 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 01:40:14.598744   10524 ssh_runner.go:195] Run: openssl version
	I0116 01:40:14.623575   10524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 01:40:14.653202   10524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 01:40:14.659394   10524 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 01:40 /usr/share/ca-certificates/minikubeCA.pem
	I0116 01:40:14.679835   10524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 01:40:14.701714   10524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
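The two `Run:` lines above follow OpenSSL's CApath lookup convention: a CA in `/etc/ssl/certs` is found via a symlink named `<subject-hash>.0` (here `b5213941.0`, matching the `openssl x509 -hash` output for minikubeCA.pem). A minimal sketch of the same convention against a throwaway self-signed certificate (file names are illustrative, not minikube's):

```shell
set -eu
# Throwaway self-signed CA, standing in for minikubeCA.pem (illustrative name).
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-ca.key -out demo-ca.pem \
  -subj "/CN=demoCA" -days 1 2>/dev/null
# Subject hash OpenSSL uses for CApath lookup (the log's value is b5213941).
hash=$(openssl x509 -hash -noout -in demo-ca.pem)
# "<hash>.0" symlink: the equivalent of the log's `ln -fs ... b5213941.0`.
ln -fs demo-ca.pem "$hash.0"
# Verification via -CApath now succeeds purely through the hash-named symlink.
openssl verify -CApath . demo-ca.pem
```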
	I0116 01:40:14.735491   10524 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 01:40:14.740840   10524 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 01:40:14.740840   10524 kubeadm.go:404] StartCluster: {Name:addons-179200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-179200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.117.123 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 01:40:14.753078   10524 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0116 01:40:14.794672   10524 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 01:40:14.823276   10524 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 01:40:14.852955   10524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 01:40:14.868933   10524 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 01:40:14.868933   10524 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 01:40:15.144261   10524 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 01:40:27.303226   10524 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 01:40:27.303226   10524 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 01:40:27.303226   10524 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 01:40:27.303226   10524 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 01:40:27.303226   10524 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 01:40:27.304250   10524 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 01:40:27.304250   10524 out.go:204]   - Generating certificates and keys ...
	I0116 01:40:27.305275   10524 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 01:40:27.305275   10524 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 01:40:27.305275   10524 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 01:40:27.305275   10524 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 01:40:27.305275   10524 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 01:40:27.305275   10524 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 01:40:27.305275   10524 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 01:40:27.306231   10524 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-179200 localhost] and IPs [172.27.117.123 127.0.0.1 ::1]
	I0116 01:40:27.306231   10524 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 01:40:27.306231   10524 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-179200 localhost] and IPs [172.27.117.123 127.0.0.1 ::1]
	I0116 01:40:27.306231   10524 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 01:40:27.306231   10524 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 01:40:27.306231   10524 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 01:40:27.306231   10524 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 01:40:27.306231   10524 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 01:40:27.306231   10524 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 01:40:27.307234   10524 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 01:40:27.307234   10524 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 01:40:27.307234   10524 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 01:40:27.307234   10524 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 01:40:27.308239   10524 out.go:204]   - Booting up control plane ...
	I0116 01:40:27.308239   10524 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 01:40:27.308239   10524 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 01:40:27.308239   10524 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 01:40:27.308239   10524 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 01:40:27.309228   10524 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 01:40:27.309228   10524 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 01:40:27.309228   10524 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 01:40:27.309228   10524 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.010532 seconds
	I0116 01:40:27.309228   10524 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 01:40:27.310241   10524 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 01:40:27.310241   10524 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 01:40:27.310241   10524 kubeadm.go:322] [mark-control-plane] Marking the node addons-179200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 01:40:27.310241   10524 kubeadm.go:322] [bootstrap-token] Using token: 679ez5.x1vm96mh06c0dc9k
	I0116 01:40:27.311240   10524 out.go:204]   - Configuring RBAC rules ...
	I0116 01:40:27.311240   10524 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 01:40:27.311240   10524 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 01:40:27.312239   10524 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 01:40:27.312239   10524 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 01:40:27.312239   10524 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 01:40:27.312239   10524 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 01:40:27.313249   10524 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 01:40:27.313249   10524 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 01:40:27.313249   10524 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 01:40:27.313249   10524 kubeadm.go:322] 
	I0116 01:40:27.313249   10524 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 01:40:27.313249   10524 kubeadm.go:322] 
	I0116 01:40:27.313249   10524 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 01:40:27.313249   10524 kubeadm.go:322] 
	I0116 01:40:27.313249   10524 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 01:40:27.313249   10524 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 01:40:27.313249   10524 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 01:40:27.314238   10524 kubeadm.go:322] 
	I0116 01:40:27.314238   10524 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 01:40:27.314238   10524 kubeadm.go:322] 
	I0116 01:40:27.314238   10524 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 01:40:27.314238   10524 kubeadm.go:322] 
	I0116 01:40:27.314238   10524 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 01:40:27.314238   10524 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 01:40:27.314238   10524 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 01:40:27.314238   10524 kubeadm.go:322] 
	I0116 01:40:27.314238   10524 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 01:40:27.315232   10524 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 01:40:27.315232   10524 kubeadm.go:322] 
	I0116 01:40:27.315232   10524 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 679ez5.x1vm96mh06c0dc9k \
	I0116 01:40:27.315232   10524 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:66ef9a38e06c175fa30850fd5c63399966a4115300a5c161cb370d2d951391b9 \
	I0116 01:40:27.315232   10524 kubeadm.go:322] 	--control-plane 
	I0116 01:40:27.315232   10524 kubeadm.go:322] 
	I0116 01:40:27.315232   10524 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 01:40:27.315232   10524 kubeadm.go:322] 
	I0116 01:40:27.315232   10524 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 679ez5.x1vm96mh06c0dc9k \
	I0116 01:40:27.316243   10524 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:66ef9a38e06c175fa30850fd5c63399966a4115300a5c161cb370d2d951391b9 
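The `--discovery-token-ca-cert-hash` printed in the join commands above is kubeadm's standard discovery hash: the SHA-256 digest of the cluster CA's DER-encoded public key. A hedged sketch reproducing the computation against a throwaway CA (the documented kubeadm openssl pipeline; file names are illustrative):

```shell
set -eu
# Throwaway CA certificate, standing in for /var/lib/minikube/certs/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo-ca.crt \
  -subj "/CN=demoCA" -days 1 2>/dev/null
# Discovery hash: sha256 over the CA's DER-encoded Subject Public Key Info,
# i.e. the hex string that follows "sha256:" in the join command.
openssl x509 -pubkey -noout -in demo-ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```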
	I0116 01:40:27.316243   10524 cni.go:84] Creating CNI manager for ""
	I0116 01:40:27.316243   10524 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0116 01:40:27.317275   10524 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 01:40:27.330235   10524 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 01:40:27.347697   10524 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
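The 457-byte `1-k8s.conflist` copied above is the bridge CNI configuration announced by the "Configuring bridge CNI" step. As an illustration only (representative shape, not the exact bytes minikube writes), a bridge conflist typically looks like:

```json
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```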
	I0116 01:40:27.405723   10524 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 01:40:27.422931   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:27.422931   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=addons-179200 minikube.k8s.io/updated_at=2024_01_16T01_40_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:27.438451   10524 ops.go:34] apiserver oom_adj: -16
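The `oom_adj: -16` above comes from the preceding `cat /proc/$(pgrep kube-apiserver)/oom_adj`: the legacy OOM-killer bias, read here to confirm the apiserver is protected from the OOM killer. Modern kernels expose the same knob as `oom_score_adj` in [-1000, 1000]; the file exists for every process, e.g.:

```shell
# Read this shell's own OOM bias (Linux only); kube-apiserver's is read the
# same way, just with its pid substituted for "self".
cat /proc/self/oom_score_adj
```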
	I0116 01:40:27.746401   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:28.257191   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:28.761609   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:29.259808   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:29.746897   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:30.252580   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:30.752285   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:31.255190   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:31.751895   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:32.255764   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:32.758314   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:33.244307   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:33.750756   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:34.252726   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:34.758540   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:35.258872   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:35.749972   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:36.249715   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:36.757128   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:37.257488   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:37.748065   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:38.251448   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:38.758905   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:39.251064   10524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 01:40:39.375160   10524 kubeadm.go:1088] duration metric: took 11.9692461s to wait for elevateKubeSystemPrivileges.
	I0116 01:40:39.375160   10524 kubeadm.go:406] StartCluster complete in 24.6341602s
	I0116 01:40:39.375460   10524 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:40:39.375460   10524 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 01:40:39.376046   10524 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:40:39.378055   10524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 01:40:39.378055   10524 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0116 01:40:39.378055   10524 addons.go:69] Setting ingress-dns=true in profile "addons-179200"
	I0116 01:40:39.378055   10524 addons.go:69] Setting cloud-spanner=true in profile "addons-179200"
	I0116 01:40:39.378055   10524 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-179200"
	I0116 01:40:39.378055   10524 addons.go:69] Setting yakd=true in profile "addons-179200"
	I0116 01:40:39.378055   10524 addons.go:69] Setting metrics-server=true in profile "addons-179200"
	I0116 01:40:39.378055   10524 addons.go:69] Setting storage-provisioner=true in profile "addons-179200"
	I0116 01:40:39.378055   10524 addons.go:69] Setting volumesnapshots=true in profile "addons-179200"
	I0116 01:40:39.378055   10524 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-179200"
	I0116 01:40:39.378055   10524 addons.go:69] Setting registry=true in profile "addons-179200"
	I0116 01:40:39.378055   10524 addons.go:234] Setting addon volumesnapshots=true in "addons-179200"
	I0116 01:40:39.378055   10524 addons.go:234] Setting addon cloud-spanner=true in "addons-179200"
	I0116 01:40:39.378055   10524 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-179200"
	I0116 01:40:39.378055   10524 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-179200"
	I0116 01:40:39.378055   10524 addons.go:69] Setting inspektor-gadget=true in profile "addons-179200"
	I0116 01:40:39.379100   10524 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-179200"
	I0116 01:40:39.378055   10524 config.go:182] Loaded profile config "addons-179200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 01:40:39.379100   10524 host.go:66] Checking if "addons-179200" exists ...
	I0116 01:40:39.378055   10524 addons.go:69] Setting gcp-auth=true in profile "addons-179200"
	I0116 01:40:39.379100   10524 mustload.go:65] Loading cluster: addons-179200
	I0116 01:40:39.379100   10524 host.go:66] Checking if "addons-179200" exists ...
	I0116 01:40:39.378055   10524 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-179200"
	I0116 01:40:39.378055   10524 addons.go:234] Setting addon yakd=true in "addons-179200"
	I0116 01:40:39.378055   10524 addons.go:69] Setting default-storageclass=true in profile "addons-179200"
	I0116 01:40:39.379100   10524 host.go:66] Checking if "addons-179200" exists ...
	I0116 01:40:39.379100   10524 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-179200"
	I0116 01:40:39.378055   10524 addons.go:234] Setting addon storage-provisioner=true in "addons-179200"
	I0116 01:40:39.378055   10524 addons.go:234] Setting addon metrics-server=true in "addons-179200"
	I0116 01:40:39.379100   10524 config.go:182] Loaded profile config "addons-179200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 01:40:39.379100   10524 host.go:66] Checking if "addons-179200" exists ...
	I0116 01:40:39.378055   10524 addons.go:234] Setting addon ingress-dns=true in "addons-179200"
	I0116 01:40:39.378055   10524 addons.go:234] Setting addon registry=true in "addons-179200"
	I0116 01:40:39.380109   10524 host.go:66] Checking if "addons-179200" exists ...
	I0116 01:40:39.380109   10524 host.go:66] Checking if "addons-179200" exists ...
	I0116 01:40:39.379100   10524 addons.go:234] Setting addon inspektor-gadget=true in "addons-179200"
	I0116 01:40:39.380109   10524 host.go:66] Checking if "addons-179200" exists ...
	I0116 01:40:39.379100   10524 host.go:66] Checking if "addons-179200" exists ...
	I0116 01:40:39.378055   10524 addons.go:69] Setting helm-tiller=true in profile "addons-179200"
	I0116 01:40:39.381052   10524 addons.go:234] Setting addon helm-tiller=true in "addons-179200"
	I0116 01:40:39.381052   10524 host.go:66] Checking if "addons-179200" exists ...
	I0116 01:40:39.379100   10524 host.go:66] Checking if "addons-179200" exists ...
	I0116 01:40:39.378055   10524 addons.go:69] Setting ingress=true in profile "addons-179200"
	I0116 01:40:39.382050   10524 addons.go:234] Setting addon ingress=true in "addons-179200"
	I0116 01:40:39.382050   10524 host.go:66] Checking if "addons-179200" exists ...
	I0116 01:40:39.382050   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:39.379100   10524 host.go:66] Checking if "addons-179200" exists ...
	I0116 01:40:39.383070   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:39.380109   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:39.384062   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:39.387062   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:39.387062   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:39.388063   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:39.389062   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:39.389062   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:39.389062   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:39.389062   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:39.389062   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:39.390051   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:39.390051   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:39.390051   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:40.108884   10524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.112.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
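The pipeline above rewrites the CoreDNS Corefile in place: the first `sed` expression inserts a `hosts` block resolving `host.minikube.internal` to the host gateway before the `forward` directive, the second inserts a `log` directive before `errors`, and the result is fed back through `kubectl replace`. The same two expressions can be exercised against a stand-in Corefile (sample content is illustrative; requires GNU sed for `\n` in `i\` text, as on the minikube guest):

```shell
set -eu
# Stand-in Corefile (illustrative, matching the indentation the patterns expect).
printf '.:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n' > Corefile.demo
# The same two sed edits the log applies before `kubectl replace -f -`:
sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.112.1 host.minikube.internal\n           fallthrough\n        }' \
    -e '/^        errors *$/i \        log' Corefile.demo
```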
	I0116 01:40:40.127132   10524 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-179200" context rescaled to 1 replicas
	I0116 01:40:40.127132   10524 start.go:223] Will wait 6m0s for node &{Name: IP:172.27.117.123 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0116 01:40:40.134132   10524 out.go:177] * Verifying Kubernetes components...
	I0116 01:40:40.178135   10524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 01:40:45.575371   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:45.575371   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:45.578370   10524 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0116 01:40:45.583533   10524 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0116 01:40:45.583533   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0116 01:40:45.583604   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:45.647496   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:45.648490   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:45.652022   10524 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0116 01:40:45.653147   10524 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0116 01:40:45.654613   10524 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0116 01:40:45.656847   10524 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0116 01:40:45.668712   10524 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0116 01:40:45.688754   10524 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0116 01:40:45.699751   10524 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0116 01:40:45.704933   10524 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0116 01:40:45.711949   10524 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0116 01:40:45.711949   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0116 01:40:45.711949   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:45.810798   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:45.810798   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:45.811813   10524 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0116 01:40:45.812802   10524 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0116 01:40:45.812802   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0116 01:40:45.812802   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:45.828325   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:45.828325   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:45.847335   10524 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0116 01:40:45.848342   10524 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 01:40:45.848342   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0116 01:40:45.848342   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:45.870069   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:45.870069   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:45.871333   10524 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 01:40:45.873491   10524 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 01:40:45.873491   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 01:40:45.873491   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:45.896674   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:45.896674   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:45.896674   10524 host.go:66] Checking if "addons-179200" exists ...
	I0116 01:40:46.041797   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:46.041797   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:46.057183   10524 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0116 01:40:46.057808   10524 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0116 01:40:46.057808   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0116 01:40:46.057808   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:46.084856   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:46.084856   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:46.087123   10524 addons.go:234] Setting addon default-storageclass=true in "addons-179200"
	I0116 01:40:46.087123   10524 host.go:66] Checking if "addons-179200" exists ...
	I0116 01:40:46.089242   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:46.180245   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:46.180245   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:46.182266   10524 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0116 01:40:46.183251   10524 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0116 01:40:46.183251   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0116 01:40:46.183251   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:46.193247   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:46.193247   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:46.208239   10524 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0116 01:40:46.212238   10524 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 01:40:46.212238   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0116 01:40:46.212238   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:46.321604   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:46.321604   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:46.322605   10524 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I0116 01:40:46.324638   10524 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0116 01:40:46.324638   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0116 01:40:46.324638   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:46.497220   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:46.497220   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:46.500226   10524 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0116 01:40:46.501214   10524 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0116 01:40:46.501214   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0116 01:40:46.501214   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:46.512203   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:46.512203   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:46.513226   10524 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0116 01:40:46.513226   10524 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 01:40:46.523387   10524 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 01:40:46.524558   10524 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 01:40:46.524558   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0116 01:40:46.524558   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:46.701814   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:46.701814   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:46.704293   10524 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-179200"
	I0116 01:40:46.704293   10524 host.go:66] Checking if "addons-179200" exists ...
	I0116 01:40:46.708309   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:46.835295   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:46.835295   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:46.848294   10524 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0116 01:40:46.854290   10524 out.go:177]   - Using image docker.io/registry:2.8.3
	I0116 01:40:46.855283   10524 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0116 01:40:46.855283   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0116 01:40:46.855283   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:47.033558   10524 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.112.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.9246287s)
	I0116 01:40:47.033558   10524 start.go:929] {"host.minikube.internal": 172.27.112.1} host record injected into CoreDNS's ConfigMap
	I0116 01:40:47.033558   10524 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (6.8553789s)
	I0116 01:40:47.035560   10524 node_ready.go:35] waiting up to 6m0s for node "addons-179200" to be "Ready" ...
	I0116 01:40:47.225703   10524 node_ready.go:49] node "addons-179200" has status "Ready":"True"
	I0116 01:40:47.226676   10524 node_ready.go:38] duration metric: took 191.1154ms waiting for node "addons-179200" to be "Ready" ...
	I0116 01:40:47.226676   10524 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 01:40:47.265929   10524 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-d98v5" in "kube-system" namespace to be "Ready" ...
	I0116 01:40:49.797031   10524 pod_ready.go:102] pod "coredns-5dd5756b68-d98v5" in "kube-system" namespace has status "Ready":"False"
	I0116 01:40:51.200421   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:51.200421   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:51.200421   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:40:51.397667   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:51.397667   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:51.397667   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:40:51.482376   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:51.482376   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:51.482376   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:40:51.799683   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:51.799683   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:51.799683   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:40:51.891729   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:51.891729   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:51.891729   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:40:51.900456   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:51.900456   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:51.900456   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:40:52.183279   10524 pod_ready.go:102] pod "coredns-5dd5756b68-d98v5" in "kube-system" namespace has status "Ready":"False"
	I0116 01:40:52.345636   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:52.345636   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:52.345636   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:40:52.610947   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:52.610947   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:52.610947   10524 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 01:40:52.610947   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 01:40:52.610947   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:52.620949   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:52.620949   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:52.620949   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:40:52.704945   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:52.704945   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:52.704945   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:40:53.361028   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:53.361218   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:53.361218   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:40:53.451283   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:53.451283   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:53.451283   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:40:53.480851   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:53.480851   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:53.480851   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:40:54.324925   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:54.324925   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:54.328536   10524 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0116 01:40:54.330189   10524 out.go:177]   - Using image docker.io/busybox:stable
	I0116 01:40:54.331456   10524 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 01:40:54.331552   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0116 01:40:54.331552   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:54.517047   10524 pod_ready.go:102] pod "coredns-5dd5756b68-d98v5" in "kube-system" namespace has status "Ready":"False"
	I0116 01:40:54.898195   10524 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0116 01:40:54.898195   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:40:56.544653   10524 pod_ready.go:102] pod "coredns-5dd5756b68-d98v5" in "kube-system" namespace has status "Ready":"False"
	I0116 01:40:58.332020   10524 pod_ready.go:92] pod "coredns-5dd5756b68-d98v5" in "kube-system" namespace has status "Ready":"True"
	I0116 01:40:58.332125   10524 pod_ready.go:81] duration metric: took 11.0661239s waiting for pod "coredns-5dd5756b68-d98v5" in "kube-system" namespace to be "Ready" ...
	I0116 01:40:58.332125   10524 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n4w25" in "kube-system" namespace to be "Ready" ...
	I0116 01:40:58.395697   10524 pod_ready.go:92] pod "coredns-5dd5756b68-n4w25" in "kube-system" namespace has status "Ready":"True"
	I0116 01:40:58.395697   10524 pod_ready.go:81] duration metric: took 63.5715ms waiting for pod "coredns-5dd5756b68-n4w25" in "kube-system" namespace to be "Ready" ...
	I0116 01:40:58.395697   10524 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-179200" in "kube-system" namespace to be "Ready" ...
	I0116 01:40:58.464709   10524 pod_ready.go:92] pod "etcd-addons-179200" in "kube-system" namespace has status "Ready":"True"
	I0116 01:40:58.464709   10524 pod_ready.go:81] duration metric: took 69.0111ms waiting for pod "etcd-addons-179200" in "kube-system" namespace to be "Ready" ...
	I0116 01:40:58.464709   10524 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-179200" in "kube-system" namespace to be "Ready" ...
	I0116 01:40:58.489169   10524 pod_ready.go:92] pod "kube-apiserver-addons-179200" in "kube-system" namespace has status "Ready":"True"
	I0116 01:40:58.489169   10524 pod_ready.go:81] duration metric: took 24.4606ms waiting for pod "kube-apiserver-addons-179200" in "kube-system" namespace to be "Ready" ...
	I0116 01:40:58.489293   10524 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-179200" in "kube-system" namespace to be "Ready" ...
	I0116 01:40:58.507090   10524 pod_ready.go:92] pod "kube-controller-manager-addons-179200" in "kube-system" namespace has status "Ready":"True"
	I0116 01:40:58.507090   10524 pod_ready.go:81] duration metric: took 17.7964ms waiting for pod "kube-controller-manager-addons-179200" in "kube-system" namespace to be "Ready" ...
	I0116 01:40:58.507090   10524 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j7sl4" in "kube-system" namespace to be "Ready" ...
	I0116 01:40:58.595902   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:40:58.595902   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:58.595902   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:40:58.687105   10524 pod_ready.go:92] pod "kube-proxy-j7sl4" in "kube-system" namespace has status "Ready":"True"
	I0116 01:40:58.687105   10524 pod_ready.go:81] duration metric: took 180.0143ms waiting for pod "kube-proxy-j7sl4" in "kube-system" namespace to be "Ready" ...
	I0116 01:40:58.687105   10524 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-179200" in "kube-system" namespace to be "Ready" ...
	I0116 01:40:58.706121   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:40:58.706181   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:58.706436   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:40:58.766895   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:40:58.766895   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:58.766895   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:40:58.831183   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:40:58.831183   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:58.831183   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:40:58.878434   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:40:58.878614   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:58.878861   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:40:58.927794   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:40:58.927891   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:58.928276   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:40:58.950884   10524 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0116 01:40:58.951054   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0116 01:40:58.997238   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:40:58.997238   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:58.998267   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:40:59.014246   10524 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0116 01:40:59.014363   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0116 01:40:59.048306   10524 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0116 01:40:59.048379   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0116 01:40:59.058006   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:40:59.058006   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:59.058006   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:40:59.087323   10524 pod_ready.go:92] pod "kube-scheduler-addons-179200" in "kube-system" namespace has status "Ready":"True"
	I0116 01:40:59.087323   10524 pod_ready.go:81] duration metric: took 400.2154ms waiting for pod "kube-scheduler-addons-179200" in "kube-system" namespace to be "Ready" ...
	I0116 01:40:59.087323   10524 pod_ready.go:38] duration metric: took 11.8605697s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 01:40:59.087323   10524 api_server.go:52] waiting for apiserver process to appear ...
	I0116 01:40:59.088298   10524 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0116 01:40:59.088383   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0116 01:40:59.097706   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:59.097706   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:59.097706   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:40:59.110895   10524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 01:40:59.178302   10524 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0116 01:40:59.178302   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0116 01:40:59.188319   10524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0116 01:40:59.230843   10524 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0116 01:40:59.230900   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0116 01:40:59.243372   10524 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0116 01:40:59.243372   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0116 01:40:59.307084   10524 api_server.go:72] duration metric: took 19.1798274s to wait for apiserver process to appear ...
	I0116 01:40:59.307598   10524 api_server.go:88] waiting for apiserver healthz status ...
	I0116 01:40:59.307685   10524 api_server.go:253] Checking apiserver healthz at https://172.27.117.123:8443/healthz ...
	I0116 01:40:59.326771   10524 api_server.go:279] https://172.27.117.123:8443/healthz returned 200:
	ok
	I0116 01:40:59.330015   10524 api_server.go:141] control plane version: v1.28.4
	I0116 01:40:59.330015   10524 api_server.go:131] duration metric: took 22.4164ms to wait for apiserver health ...
	I0116 01:40:59.330015   10524 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 01:40:59.342985   10524 system_pods.go:59] 7 kube-system pods found
	I0116 01:40:59.342985   10524 system_pods.go:61] "coredns-5dd5756b68-d98v5" [1f130429-4312-4566-a4fa-95d5716aa16a] Running
	I0116 01:40:59.342985   10524 system_pods.go:61] "coredns-5dd5756b68-n4w25" [8441bd01-8aaf-4d0b-9288-6f5f2c9beec6] Running
	I0116 01:40:59.342985   10524 system_pods.go:61] "etcd-addons-179200" [bea6f43f-ee63-44d9-b791-99ff1f473840] Running
	I0116 01:40:59.342985   10524 system_pods.go:61] "kube-apiserver-addons-179200" [0cc64fa4-b698-477d-83a9-65645fc79bd8] Running
	I0116 01:40:59.342985   10524 system_pods.go:61] "kube-controller-manager-addons-179200" [a328bea2-77d9-43bd-8dcd-de14e65dbe4c] Running
	I0116 01:40:59.342985   10524 system_pods.go:61] "kube-proxy-j7sl4" [81d9b903-cdad-481c-ae6d-8d889272cdb7] Running
	I0116 01:40:59.342985   10524 system_pods.go:61] "kube-scheduler-addons-179200" [a1a5528a-a421-4058-96fc-bc9f384ee23d] Running
	I0116 01:40:59.342985   10524 system_pods.go:74] duration metric: took 12.9698ms to wait for pod list to return data ...
	I0116 01:40:59.342985   10524 default_sa.go:34] waiting for default service account to be created ...
	I0116 01:40:59.363980   10524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0116 01:40:59.372754   10524 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 01:40:59.372754   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0116 01:40:59.374745   10524 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0116 01:40:59.374745   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0116 01:40:59.394764   10524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 01:40:59.425766   10524 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0116 01:40:59.425766   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0116 01:40:59.478267   10524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0116 01:40:59.485176   10524 default_sa.go:45] found service account: "default"
	I0116 01:40:59.485176   10524 default_sa.go:55] duration metric: took 142.1901ms for default service account to be created ...
	I0116 01:40:59.485176   10524 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 01:40:59.504175   10524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0116 01:40:59.505178   10524 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0116 01:40:59.505178   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0116 01:40:59.599173   10524 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0116 01:40:59.599173   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0116 01:40:59.663759   10524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0116 01:40:59.692405   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:59.692610   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:59.692610   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:40:59.712018   10524 system_pods.go:86] 7 kube-system pods found
	I0116 01:40:59.712018   10524 system_pods.go:89] "coredns-5dd5756b68-d98v5" [1f130429-4312-4566-a4fa-95d5716aa16a] Running
	I0116 01:40:59.712018   10524 system_pods.go:89] "coredns-5dd5756b68-n4w25" [8441bd01-8aaf-4d0b-9288-6f5f2c9beec6] Running
	I0116 01:40:59.712018   10524 system_pods.go:89] "etcd-addons-179200" [bea6f43f-ee63-44d9-b791-99ff1f473840] Running
	I0116 01:40:59.712018   10524 system_pods.go:89] "kube-apiserver-addons-179200" [0cc64fa4-b698-477d-83a9-65645fc79bd8] Running
	I0116 01:40:59.712018   10524 system_pods.go:89] "kube-controller-manager-addons-179200" [a328bea2-77d9-43bd-8dcd-de14e65dbe4c] Running
	I0116 01:40:59.712018   10524 system_pods.go:89] "kube-proxy-j7sl4" [81d9b903-cdad-481c-ae6d-8d889272cdb7] Running
	I0116 01:40:59.712018   10524 system_pods.go:89] "kube-scheduler-addons-179200" [a1a5528a-a421-4058-96fc-bc9f384ee23d] Running
	I0116 01:40:59.712018   10524 system_pods.go:126] duration metric: took 226.8408ms to wait for k8s-apps to be running ...
	I0116 01:40:59.712018   10524 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 01:40:59.734017   10524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 01:40:59.755140   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:40:59.755398   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:59.755398   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:40:59.794250   10524 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0116 01:40:59.794250   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0116 01:40:59.817896   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:40:59.817896   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:59.817896   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:40:59.881352   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:40:59.881499   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:59.881765   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:40:59.897618   10524 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0116 01:40:59.897618   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0116 01:40:59.953993   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:40:59.953993   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:59.953993   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:40:59.996096   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:40:59.996154   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:40:59.996206   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:41:00.123873   10524 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0116 01:41:00.123930   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0116 01:41:00.252050   10524 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0116 01:41:00.252050   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0116 01:41:00.347404   10524 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0116 01:41:00.347526   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0116 01:41:00.377740   10524 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0116 01:41:00.377740   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0116 01:41:00.428235   10524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0116 01:41:00.441324   10524 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0116 01:41:00.441324   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0116 01:41:00.447973   10524 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0116 01:41:00.448066   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0116 01:41:00.589686   10524 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0116 01:41:00.589757   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0116 01:41:00.595453   10524 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0116 01:41:00.595632   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0116 01:41:00.621888   10524 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0116 01:41:00.622036   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0116 01:41:00.636964   10524 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 01:41:00.637033   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0116 01:41:00.685699   10524 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0116 01:41:00.685699   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0116 01:41:00.745202   10524 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0116 01:41:00.745329   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0116 01:41:00.815762   10524 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0116 01:41:00.815762   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0116 01:41:00.817069   10524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0116 01:41:00.854795   10524 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0116 01:41:00.854929   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0116 01:41:00.875156   10524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 01:41:00.968343   10524 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 01:41:00.968461   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0116 01:41:01.044152   10524 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0116 01:41:01.044152   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0116 01:41:01.065143   10524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0116 01:41:01.224159   10524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0116 01:41:01.317528   10524 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0116 01:41:01.317528   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0116 01:41:01.557542   10524 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0116 01:41:01.557640   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0116 01:41:01.903013   10524 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 01:41:01.903077   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0116 01:41:02.070581   10524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0116 01:41:02.115378   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:41:02.115440   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:41:02.115657   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:41:02.547807   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:41:02.547807   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:41:02.547807   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:41:02.807321   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:41:02.807488   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:41:02.807776   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:41:02.962520   10524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 01:41:03.272173   10524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0116 01:41:03.735402   10524 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0116 01:41:04.042100   10524 addons.go:234] Setting addon gcp-auth=true in "addons-179200"
	I0116 01:41:04.042304   10524 host.go:66] Checking if "addons-179200" exists ...
	I0116 01:41:04.043675   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:41:04.687104   10524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.3230885s)
	I0116 01:41:04.687104   10524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.4987487s)
	I0116 01:41:06.343142   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:41:06.343142   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:41:06.361770   10524 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0116 01:41:06.361770   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-179200 ).state
	I0116 01:41:08.964871   10524 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 01:41:08.965089   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:41:08.965089   10524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-179200 ).networkadapters[0]).ipaddresses[0]
	I0116 01:41:09.663298   10524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.1849646s)
	I0116 01:41:09.663298   10524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.1590568s)
	I0116 01:41:09.663298   10524 addons.go:470] Verifying addon metrics-server=true in "addons-179200"
	I0116 01:41:09.663298   10524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.2684667s)
	I0116 01:41:10.037732   10524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.3739062s)
	I0116 01:41:10.037832   10524 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (10.303748s)
	I0116 01:41:10.037940   10524 system_svc.go:56] duration metric: took 10.3258545s WaitForService to wait for kubelet.
	I0116 01:41:10.037973   10524 kubeadm.go:581] duration metric: took 29.9106134s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 01:41:10.038703   10524 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-179200 service yakd-dashboard -n yakd-dashboard
	
	I0116 01:41:10.038034   10524 node_conditions.go:102] verifying NodePressure condition ...
	I0116 01:41:10.183307   10524 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 01:41:10.183307   10524 node_conditions.go:123] node cpu capacity is 2
	I0116 01:41:10.183307   10524 node_conditions.go:105] duration metric: took 144.074ms to run NodePressure ...
	I0116 01:41:10.183307   10524 start.go:228] waiting for startup goroutines ...
	I0116 01:41:11.703384   10524 main.go:141] libmachine: [stdout =====>] : 172.27.117.123
	
	I0116 01:41:11.703661   10524 main.go:141] libmachine: [stderr =====>] : 
	I0116 01:41:11.703848   10524 sshutil.go:53] new ssh client: &{IP:172.27.117.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-179200\id_rsa Username:docker}
	I0116 01:41:12.343036   10524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.9146658s)
	I0116 01:41:12.343138   10524 addons.go:470] Verifying addon ingress=true in "addons-179200"
	I0116 01:41:12.343138   10524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (11.525942s)
	I0116 01:41:12.344329   10524 out.go:177] * Verifying ingress addon...
	I0116 01:41:12.343250   10524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.4680193s)
	I0116 01:41:12.343359   10524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.2780805s)
	I0116 01:41:12.344598   10524 addons.go:470] Verifying addon registry=true in "addons-179200"
	I0116 01:41:12.345631   10524 out.go:177] * Verifying registry addon...
	W0116 01:41:12.345156   10524 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0116 01:41:12.345773   10524 retry.go:31] will retry after 321.644069ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
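The failure above is a CRD ordering race: the `VolumeSnapshotClass` object is applied in the same batch that creates its CRD, so the mapping is not yet established, and `retry.go:31` schedules a re-run after a short delay (the subsequent attempt uses `kubectl apply --force`). A hedged sketch of that retry-with-backoff pattern, assuming nothing about minikube's internals; the simulated `flaky_apply` and the delay values are illustrative:

```python
# Re-run a step after a growing delay until it succeeds, mirroring the
# "will retry after 321.644069ms" behavior in the log above.
import time

def retry(fn, attempts=5, base_delay=0.3):
    delay = base_delay
    for i in range(attempts):
        try:
            return fn()
        except RuntimeError:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(delay)
            delay *= 2  # back off between retries

calls = {"n": 0}
def flaky_apply():
    """Fails twice, as when the CRD is not yet established, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError('no matches for kind "VolumeSnapshotClass"')
    return "applied"

print(retry(flaky_apply, base_delay=0.01))  # succeeds on the third attempt
```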
	I0116 01:41:12.348948   10524 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0116 01:41:12.349105   10524 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0116 01:41:12.358516   10524 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0116 01:41:12.358516   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:12.361460   10524 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0116 01:41:12.361491   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:12.688858   10524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0116 01:41:12.870574   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:12.880056   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:13.368427   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:13.372513   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:13.876489   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:13.876791   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:14.210943   10524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (12.9866428s)
	I0116 01:41:14.211001   10524 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-179200"
	I0116 01:41:14.211076   10524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (12.1403414s)
	I0116 01:41:14.212171   10524 out.go:177] * Verifying csi-hostpath-driver addon...
	I0116 01:41:14.211148   10524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.2485544s)
	I0116 01:41:14.211281   10524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.9390371s)
	I0116 01:41:14.211381   10524 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (7.84956s)
	I0116 01:41:14.213305   10524 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0116 01:41:14.214303   10524 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0116 01:41:14.215057   10524 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0116 01:41:14.215203   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0116 01:41:14.215477   10524 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0116 01:41:14.323708   10524 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0116 01:41:14.323763   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:14.329140   10524 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0116 01:41:14.329140   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	W0116 01:41:14.421404   10524 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
	I0116 01:41:14.438927   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:14.444609   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:14.449719   10524 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 01:41:14.449785   10524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0116 01:41:14.593134   10524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0116 01:41:14.730174   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:14.877834   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:14.880230   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:15.237591   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:15.365176   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:15.368821   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:15.728455   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:15.871109   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:15.871209   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:16.238323   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:16.359124   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:16.360647   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:16.729460   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:16.858372   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:16.860409   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:17.102527   10524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.4136409s)
	I0116 01:41:17.239305   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:17.370931   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:17.370931   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:17.693017   10524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.0988897s)
	I0116 01:41:17.701670   10524 addons.go:470] Verifying addon gcp-auth=true in "addons-179200"
	I0116 01:41:17.702244   10524 out.go:177] * Verifying gcp-auth addon...
	I0116 01:41:17.704390   10524 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0116 01:41:17.712511   10524 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0116 01:41:17.712511   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:17.733397   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:17.861973   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:17.867699   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:18.220573   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:18.228097   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:18.370733   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:18.382064   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:18.716576   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:18.723794   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:18.871930   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:18.875939   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:19.220979   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:19.227360   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:19.364816   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:19.378621   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:19.729204   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:19.732268   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:19.871080   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:19.871377   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:20.214594   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:20.235358   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:20.361186   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:20.361506   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:20.710364   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:20.726916   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:20.875498   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:20.875498   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:21.217786   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:21.227243   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:21.367300   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:21.371554   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:21.711982   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:21.729307   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:21.873966   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:21.875156   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:22.219274   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:22.225341   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:22.366398   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:22.366398   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:22.712455   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:22.732696   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:22.857007   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:22.858386   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:23.221316   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:23.225360   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:23.368947   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:23.368947   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:23.712099   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:23.730568   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:23.860203   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:23.868203   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:24.223346   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:24.237320   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:24.367784   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:24.368775   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:24.715150   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:24.735391   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:24.861554   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:24.868086   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:25.222382   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:25.225355   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:25.369432   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:25.369652   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:25.714081   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:25.734398   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:25.857442   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:25.858440   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:26.229633   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:26.243443   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:26.367304   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:26.370144   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:26.714340   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:26.732118   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:26.859758   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:26.859758   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:27.223511   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:27.224490   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:27.366476   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:27.367028   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:27.712636   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:27.730358   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:27.860003   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:27.863472   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:28.226846   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:28.228382   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:28.369631   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:28.370607   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:28.718472   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:28.723702   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:28.862529   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:28.862529   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:29.227905   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:29.228967   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:29.369829   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:29.370445   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:29.720374   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:29.727180   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:30.425521   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:30.426811   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:30.431511   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:30.432257   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:30.435827   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:30.438783   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:30.710623   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:30.732982   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:30.859083   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:30.860829   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:31.214340   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:31.234307   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:31.358666   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:31.361023   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:31.720775   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:31.727214   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:31.864465   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:31.865004   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:32.210930   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:32.227723   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:32.369571   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:32.372506   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:32.714874   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:32.733442   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:32.862725   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:32.863732   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:33.223380   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:33.235125   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:33.368884   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:33.369419   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:33.715945   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:33.724186   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:33.860517   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:33.861604   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:34.224641   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:34.228249   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:34.368658   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:34.368658   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:34.717733   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:34.724276   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:34.862905   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:34.863451   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:35.213039   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:35.230661   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:35.357879   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:35.358629   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:35.724036   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:35.725995   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:35.867273   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:35.867994   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:36.216378   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:36.223236   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:36.359538   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:36.359665   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:36.725050   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:36.727134   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:36.867466   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:36.870526   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:37.215872   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:37.223456   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:37.361979   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:37.361979   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:37.726552   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:37.729298   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:37.870758   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:37.873939   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:38.218209   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:38.224857   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:38.363329   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:38.364054   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:38.722446   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:38.726452   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:38.866520   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:38.868269   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:39.221277   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:39.224492   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:39.371313   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:39.371413   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:39.725235   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:39.728928   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:39.876583   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:39.879492   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:40.216125   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:40.223187   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:40.364538   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:40.369170   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:40.727205   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:40.727972   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:40.923329   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:40.923933   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:41.218321   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:41.223454   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:41.365728   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:41.370586   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:41.725411   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:41.729068   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:41.871087   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:41.873581   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:42.216333   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:42.224969   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:42.361275   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:42.361355   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:42.726135   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:42.728910   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:42.880597   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:42.881237   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:43.218185   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:43.224351   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:43.364562   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:43.367752   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:43.710716   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:43.728935   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:43.868725   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:43.871470   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:44.216035   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:44.223607   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:44.361449   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:44.363013   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:44.728049   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:44.729202   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:44.871741   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:44.875079   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:45.216208   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:45.223839   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:45.366135   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:45.370729   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:45.724843   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:45.726848   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:45.868658   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:45.870390   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:46.217128   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:46.225625   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:46.365421   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:46.366596   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:46.711526   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:46.729607   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:46.856637   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:46.857089   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:47.210150   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:47.225727   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:47.367227   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:47.368708   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:47.715274   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:47.731437   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:47.858260   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:47.859153   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:48.220141   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:48.246053   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:48.361752   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:48.362442   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:48.725011   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:48.727720   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:48.871443   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:48.872306   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:49.218591   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:49.224942   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:49.362681   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:49.363091   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:49.712040   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:49.730588   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:49.873339   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:49.874053   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:50.219480   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:50.228178   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:50.364535   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:50.364535   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:50.714664   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:50.732200   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:50.856736   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:50.856990   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:51.218539   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:51.225553   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:51.361159   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:51.361159   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:51.727781   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:51.728751   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:51.871731   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:51.873040   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:52.217770   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:52.224599   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:52.362592   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:52.365156   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:52.711127   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:52.728749   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:52.871839   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:52.872101   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:53.214977   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:53.235236   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:53.359892   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:53.361197   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:53.728579   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:53.752664   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:53.867472   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:53.873331   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:54.215965   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:54.221559   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:54.358148   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:54.360355   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:54.724256   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:54.726742   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:54.868322   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:54.868322   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:55.215425   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:55.232255   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:55.360147   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:55.360147   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:55.726768   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:55.727730   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:55.868743   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:55.868743   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:56.217304   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:56.223043   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:56.362443   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:56.362901   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:56.725241   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:56.728343   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:56.868538   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:56.870130   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:57.215298   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:57.231625   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:57.363036   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:57.363036   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:57.719157   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:57.724541   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:57.860071   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:57.860605   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:58.222761   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:58.235268   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:58.418016   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:58.418016   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:58.724425   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:58.726662   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:58.871785   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:58.872878   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:59.213785   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:59.227880   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:41:59.368678   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:59.368678   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:59.966537   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:41:59.970523   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:41:59.974119   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:41:59.978994   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:00.212460   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:00.233560   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:00.373451   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:00.373763   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:00.718989   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:00.731342   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:00.862149   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:00.865222   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:01.210735   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:01.229075   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:01.372278   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:01.372827   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:01.719818   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:01.726830   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:01.865661   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:01.868241   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:02.212020   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:02.227839   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:02.369984   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:02.370288   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:02.713487   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:02.728066   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:02.873100   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:02.874787   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:03.217244   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:03.228813   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:03.371633   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:03.374015   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:03.710204   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:03.727229   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:03.876257   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:03.876317   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:04.218311   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:04.225019   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:04.370370   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:04.373462   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:04.711044   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:04.729684   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:04.872946   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:04.873675   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:05.219813   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:05.228075   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:05.364618   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:05.365796   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:05.714867   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:05.732858   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:05.862761   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:05.863794   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:06.215968   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:06.224538   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:06.360511   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:06.361849   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:06.716245   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:06.727810   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:06.861761   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:06.862349   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:07.225412   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:07.226589   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:07.366137   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:07.366965   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:07.717318   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:07.723940   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:07.861113   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:07.863117   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:08.226620   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:08.229622   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:08.387486   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:08.387486   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:08.718897   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:08.725854   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:08.865315   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:08.866303   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:09.262507   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:09.274375   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:09.363005   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:09.363999   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:09.726189   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:09.727132   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:09.865743   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:09.866663   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:10.217957   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:10.239586   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:10.373304   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:10.373648   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:10.718145   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:10.725065   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:10.864104   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:10.867564   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:11.211566   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:11.228874   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:11.372063   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:11.373070   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:11.724666   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:11.725597   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:11.867005   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:11.867731   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:12.213293   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:12.232952   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:12.359609   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:12.359609   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:12.725550   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:12.728180   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:12.866723   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:12.867125   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:13.211003   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:13.228649   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:13.371820   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:13.372236   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:13.717211   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:13.722883   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:13.862471   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:13.864464   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:14.210699   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:14.227860   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:14.359502   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:14.360489   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:14.716053   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:14.722771   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:14.860133   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:14.863455   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:15.226568   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:15.229476   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:15.371944   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:15.371944   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:15.715776   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:15.731361   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:15.861724   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:15.861724   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:16.226806   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:16.231643   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:16.418470   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:16.418912   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:16.770115   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:16.776800   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:17.016740   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:17.018911   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:17.214661   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:17.229522   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:17.368963   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:17.371108   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:17.725141   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:17.727947   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:17.865948   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:17.866378   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:18.214713   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:18.237728   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:18.361927   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:18.361927   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:18.727816   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:18.728188   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:18.869618   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:18.872199   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:19.215919   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:19.232066   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:19.359443   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:19.370836   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:19.722953   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:19.726354   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:19.861778   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:19.861778   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:20.210281   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:20.228306   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:20.378162   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:20.379759   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:20.719784   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:20.725874   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:20.866757   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:20.868515   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:21.214717   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:21.231274   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:21.357877   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:21.358607   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:21.727751   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:21.730981   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:21.867990   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:21.870778   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:22.215497   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:22.233860   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:22.360675   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:22.360941   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:22.724774   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:22.727040   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:22.868177   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:22.870117   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:23.217092   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:23.224633   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:23.362077   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:23.363249   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:23.713434   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:23.728004   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:23.857811   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:23.858628   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:24.226531   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:24.227496   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:24.370432   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:24.370514   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:24.715355   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:24.733043   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:24.861434   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:24.861719   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:25.220639   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:25.228146   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:25.366007   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:25.367416   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:25.715672   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:25.732235   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:25.859863   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:25.862284   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:26.221994   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:26.228888   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:26.359344   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:26.361211   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:26.717544   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:26.743549   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:27.023368   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:27.023565   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:27.215552   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:27.465242   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:27.467675   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:27.470234   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:27.717650   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:27.723794   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:27.866779   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:27.866779   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:28.227189   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:28.229585   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:28.371002   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:28.371363   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:28.716939   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:28.728790   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:28.860158   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:28.865159   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:29.227621   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:29.231057   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:29.365100   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:29.367233   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:29.712728   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:29.729981   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:29.859324   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:29.859572   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:30.224134   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:30.228009   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:30.366514   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:30.366672   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:30.711791   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:30.733834   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:31.071432   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:31.074738   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:31.215766   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:31.232881   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:31.358038   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:31.359983   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:31.720605   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:31.742862   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:31.861864   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:31.866462   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:32.225326   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:32.226390   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:32.368855   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:32.369532   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:32.717374   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:32.724809   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:32.863271   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:32.864237   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:33.224870   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:33.230955   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:33.372604   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:33.376318   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:33.720427   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:33.726718   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:33.865096   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:33.867788   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:34.229469   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:34.230567   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:34.368164   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:34.373063   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:34.716471   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:34.726080   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:34.858881   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:34.859244   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:35.218479   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:35.235992   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:35.376803   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:35.397589   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:35.710376   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:35.726813   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:35.868658   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:35.869644   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:36.217859   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:36.225149   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:36.363263   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:36.363263   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:36.710717   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:36.729340   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:36.871134   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:36.871791   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:37.218853   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:37.227542   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:37.359001   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:37.360794   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:37.725858   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:37.728838   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:37.866172   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:37.867060   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:38.226181   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:38.237951   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:38.373958   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:38.379827   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:38.719864   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:38.727133   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:38.864652   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:38.867247   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:39.378379   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:39.378823   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:39.381447   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:39.382786   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:39.721676   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:39.725558   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:39.866202   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:39.867200   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:40.229088   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:40.230055   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:40.365833   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:40.366838   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:40.713027   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:40.729912   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:40.871492   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:40.873447   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:41.217617   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:41.224505   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:41.363120   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:41.366074   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:41.712108   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:41.730489   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:41.856695   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:41.859058   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:42.217482   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:42.223884   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:42.362464   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:42.363134   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:42.711107   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:42.727038   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:42.872188   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:42.872667   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:43.219726   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:43.226361   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:43.370521   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:43.370703   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:43.712830   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:43.727167   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:43.871671   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:43.874294   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:44.221752   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:44.230964   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:44.361026   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:44.363708   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:44.719897   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:44.732591   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:44.870872   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:44.876659   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:45.227247   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:45.233140   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:45.432822   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:45.436228   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:45.801754   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:45.815766   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:45.876029   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:45.876029   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:46.212661   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:46.230630   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:46.358667   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:46.358985   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:46.723078   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:46.727744   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:46.866389   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:46.866551   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:47.211448   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:47.228028   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:47.371462   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:47.372033   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:47.724746   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:47.730719   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:47.864585   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:47.865302   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:48.211230   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:48.229980   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:48.359553   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:48.359844   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:48.722690   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:48.727692   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:48.866506   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:48.866506   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:49.213169   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:49.228863   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:49.357460   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:49.357460   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:49.726155   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:49.726155   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:49.868702   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:49.868702   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:50.216623   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:50.222961   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:50.363989   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:50.364040   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:50.712199   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:50.728384   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:50.857847   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:50.861073   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:51.218313   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:51.225912   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:51.364636   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:51.364941   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:51.712256   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:51.729281   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:51.870530   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:51.870530   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:52.220941   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:52.227282   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:52.365602   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:52.368318   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:52.711921   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:52.729450   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:52.859279   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:52.859392   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:53.225788   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:53.236384   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:53.371413   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:53.371413   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:53.718144   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:53.723770   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:53.863054   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:53.863634   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:54.225639   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:54.229223   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:54.365642   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:54.365642   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:54.714112   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:54.732367   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:54.857284   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:54.857938   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:55.217795   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:55.224652   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:55.362835   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:55.367173   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:55.967124   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:55.968758   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:55.969721   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:55.972644   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:56.227603   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:56.227603   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:56.368582   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:56.371022   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:56.712586   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:56.732016   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:56.858652   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:56.859444   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0116 01:42:57.221513   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:57.227917   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:57.364536   10524 kapi.go:107] duration metric: took 1m45.0149777s to wait for kubernetes.io/minikube-addons=registry ...
	I0116 01:42:57.367660   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:57.713149   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:57.730168   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:57.916545   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:58.219167   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:58.225012   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:58.360238   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:58.723726   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:58.726137   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:58.864771   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:59.225845   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:59.229166   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:59.363117   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:42:59.726163   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:42:59.728305   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:42:59.868533   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:00.282208   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:00.297493   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:00.375600   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:00.720084   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:00.726781   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:00.870965   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:01.219543   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:01.226074   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:01.362444   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:01.725581   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:01.728209   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:01.867730   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:02.220130   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:02.230231   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:02.362878   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:02.726690   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:02.733511   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:02.867788   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:03.227876   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:03.233480   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:03.360752   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:03.732758   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:03.734678   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:03.870516   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:04.233066   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:04.235279   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:04.361276   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:04.712495   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:04.731129   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:04.855743   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:05.228160   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:05.229147   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:05.370684   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:05.716125   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:05.736730   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:05.858943   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:06.224829   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:06.227656   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:06.369306   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:06.715276   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:06.735285   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:06.857363   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:07.224677   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:07.226661   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:07.367580   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:07.717868   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:07.724487   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:07.862021   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:08.219574   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:08.225418   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:08.359274   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:08.720275   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:08.727109   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:08.864444   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:09.210681   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:09.228507   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:09.371045   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:09.710426   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:09.737831   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:09.863401   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:10.227098   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:10.228510   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:10.370300   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:10.719325   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:10.726756   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:10.865682   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:11.225252   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:11.228014   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:11.366000   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:11.712442   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:11.729937   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:11.858675   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:12.227505   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:12.232122   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:12.363419   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:12.725882   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:12.726209   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:12.866397   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:13.225262   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:13.227755   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:13.367240   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:13.712923   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:13.737747   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:13.859576   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:14.220362   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:14.229179   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:14.363988   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:14.714948   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:14.732035   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:14.857473   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:15.225600   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:15.225897   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:15.374162   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:15.715750   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:15.729623   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:15.857120   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:16.223909   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:16.225901   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:16.367812   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:16.718951   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:16.724294   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:16.857362   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:17.226910   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:17.230521   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:17.366573   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:17.714702   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:17.733248   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:17.861736   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:18.220643   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:18.232957   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:18.366540   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:18.765943   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:18.769689   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0116 01:43:18.874536   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:19.212130   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:19.228514   10524 kapi.go:107] duration metric: took 2m5.0122245s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0116 01:43:19.377869   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:19.715704   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:19.871327   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:20.215713   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:20.356493   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:20.714883   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:20.870152   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:21.211958   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:21.369338   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:21.715080   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:21.870446   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:22.225483   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:22.367482   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:22.715009   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:22.873941   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:23.225165   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:23.364636   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:23.713310   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:23.857766   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:24.217621   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:24.360132   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:24.719839   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:24.860826   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:25.215159   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:25.358753   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:25.719383   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:25.858064   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:26.216640   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:26.357635   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:26.716842   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:26.865144   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:27.224385   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:27.365022   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:27.712360   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:27.870701   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:28.214250   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:28.359494   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:28.716794   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:28.859403   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:29.226122   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:29.370414   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:29.719751   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:29.863346   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:30.221002   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:30.366564   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:30.714730   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:30.857665   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:31.225084   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:31.369566   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:31.717140   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:31.861610   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:32.224053   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:32.370455   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:32.722399   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:32.862458   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:33.211400   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:33.375420   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:33.721008   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:33.864116   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:34.215035   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:34.470198   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:34.717377   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:34.858672   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:35.223394   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:35.368488   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:35.718726   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:35.858927   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:36.220595   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:36.366902   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:36.714142   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:36.859779   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:37.225099   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:37.369426   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:37.718032   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:37.861258   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:38.224815   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:38.366559   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:38.716163   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:38.860414   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:39.219817   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:39.363959   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:39.713260   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:39.857536   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:40.222063   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:40.366940   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:40.716530   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:40.872181   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:41.221716   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:41.366385   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:41.715334   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:41.859972   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:42.219076   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:42.362624   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:42.770639   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:42.862835   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:43.220431   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:43.369797   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:43.718572   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:43.861702   10524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0116 01:43:44.226434   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:44.367438   10524 kapi.go:107] duration metric: took 2m32.0173449s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0116 01:43:44.717315   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:45.211309   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:45.717077   10524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0116 01:43:46.224750   10524 kapi.go:107] duration metric: took 2m28.5192205s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0116 01:43:46.225271   10524 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-179200 cluster.
	I0116 01:43:46.226321   10524 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0116 01:43:46.227087   10524 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0116 01:43:46.228039   10524 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, metrics-server, storage-provisioner, yakd, helm-tiller, inspektor-gadget, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0116 01:43:46.228670   10524 addons.go:505] enable addons completed in 3m6.8494009s: enabled=[nvidia-device-plugin ingress-dns cloud-spanner metrics-server storage-provisioner yakd helm-tiller inspektor-gadget default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0116 01:43:46.228803   10524 start.go:233] waiting for cluster config update ...
	I0116 01:43:46.228803   10524 start.go:242] writing updated cluster config ...
	I0116 01:43:46.245017   10524 ssh_runner.go:195] Run: rm -f paused
	I0116 01:43:46.499562   10524 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 01:43:46.500563   10524 out.go:177] * Done! kubectl is now configured to use "addons-179200" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-01-16 01:38:35 UTC, ends at Tue 2024-01-16 01:44:41 UTC. --
	Jan 16 01:44:27 addons-179200 dockerd[1323]: time="2024-01-16T01:44:27.688214766Z" level=warning msg="cleaning up after shim disconnected" id=010c667cdb4b2918e2e8dab8e22705f9962bf0630ca4df386060e09c8b289a26 namespace=moby
	Jan 16 01:44:27 addons-179200 dockerd[1323]: time="2024-01-16T01:44:27.688274166Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 16 01:44:28 addons-179200 dockerd[1323]: time="2024-01-16T01:44:28.161076353Z" level=info msg="shim disconnected" id=2f13695eb2902891e31fe00561ed41fd5c76a3e21c9a5a20203636d001d60c0b namespace=moby
	Jan 16 01:44:28 addons-179200 dockerd[1323]: time="2024-01-16T01:44:28.161230554Z" level=warning msg="cleaning up after shim disconnected" id=2f13695eb2902891e31fe00561ed41fd5c76a3e21c9a5a20203636d001d60c0b namespace=moby
	Jan 16 01:44:28 addons-179200 dockerd[1323]: time="2024-01-16T01:44:28.161250754Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 16 01:44:28 addons-179200 dockerd[1317]: time="2024-01-16T01:44:28.163155467Z" level=info msg="ignoring event" container=2f13695eb2902891e31fe00561ed41fd5c76a3e21c9a5a20203636d001d60c0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 16 01:44:28 addons-179200 dockerd[1317]: time="2024-01-16T01:44:28.233913641Z" level=warning msg="failed to close stdin: task 2f13695eb2902891e31fe00561ed41fd5c76a3e21c9a5a20203636d001d60c0b not found: not found"
	Jan 16 01:44:29 addons-179200 dockerd[1317]: time="2024-01-16T01:44:29.571704178Z" level=info msg="ignoring event" container=cb97b8073ae718bdd4362e56bef4e48d46316af053535be37e4d9fae5b9e6939 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 16 01:44:29 addons-179200 dockerd[1323]: time="2024-01-16T01:44:29.573948293Z" level=info msg="shim disconnected" id=cb97b8073ae718bdd4362e56bef4e48d46316af053535be37e4d9fae5b9e6939 namespace=moby
	Jan 16 01:44:29 addons-179200 dockerd[1323]: time="2024-01-16T01:44:29.574771298Z" level=warning msg="cleaning up after shim disconnected" id=cb97b8073ae718bdd4362e56bef4e48d46316af053535be37e4d9fae5b9e6939 namespace=moby
	Jan 16 01:44:29 addons-179200 dockerd[1323]: time="2024-01-16T01:44:29.574817899Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 16 01:44:29 addons-179200 dockerd[1323]: time="2024-01-16T01:44:29.601872479Z" level=warning msg="cleanup warnings time=\"2024-01-16T01:44:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jan 16 01:44:34 addons-179200 dockerd[1317]: time="2024-01-16T01:44:34.386223770Z" level=info msg="ignoring event" container=25e558ceb30e912f655bc833d6fc1244d6c458bdb6a1c69951990244b61fb8a2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 16 01:44:34 addons-179200 dockerd[1323]: time="2024-01-16T01:44:34.390019094Z" level=info msg="shim disconnected" id=25e558ceb30e912f655bc833d6fc1244d6c458bdb6a1c69951990244b61fb8a2 namespace=moby
	Jan 16 01:44:34 addons-179200 dockerd[1323]: time="2024-01-16T01:44:34.390752899Z" level=warning msg="cleaning up after shim disconnected" id=25e558ceb30e912f655bc833d6fc1244d6c458bdb6a1c69951990244b61fb8a2 namespace=moby
	Jan 16 01:44:34 addons-179200 dockerd[1323]: time="2024-01-16T01:44:34.390864999Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 16 01:44:34 addons-179200 dockerd[1323]: time="2024-01-16T01:44:34.544630582Z" level=info msg="shim disconnected" id=2907441f83b36bafc77f6ac2a78160daab8eb380a1297d066e1ba110214b8927 namespace=moby
	Jan 16 01:44:34 addons-179200 dockerd[1323]: time="2024-01-16T01:44:34.544764383Z" level=warning msg="cleaning up after shim disconnected" id=2907441f83b36bafc77f6ac2a78160daab8eb380a1297d066e1ba110214b8927 namespace=moby
	Jan 16 01:44:34 addons-179200 dockerd[1323]: time="2024-01-16T01:44:34.544777683Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 16 01:44:34 addons-179200 dockerd[1317]: time="2024-01-16T01:44:34.547101398Z" level=info msg="ignoring event" container=2907441f83b36bafc77f6ac2a78160daab8eb380a1297d066e1ba110214b8927 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 16 01:44:40 addons-179200 cri-dockerd[1210]: time="2024-01-16T01:44:40Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931"
	Jan 16 01:44:40 addons-179200 dockerd[1323]: time="2024-01-16T01:44:40.881454716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 16 01:44:40 addons-179200 dockerd[1323]: time="2024-01-16T01:44:40.881550617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 01:44:40 addons-179200 dockerd[1323]: time="2024-01-16T01:44:40.882499922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 16 01:44:40 addons-179200 dockerd[1323]: time="2024-01-16T01:44:40.882629623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	6040f05f2b54f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931                            2 seconds ago        Running             gadget                                   4                   527ec5065afed       gadget-c96rx
	2f13695eb2902       alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                                          15 seconds ago       Exited              helm-test                                0                   cb97b8073ae71       helm-test
	bfe972940889a       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c77d8eb2b3dc6e9d60767f824b296e42d6d4fdc2f17f507492a2c981933db931                            50 seconds ago       Exited              gadget                                   3                   527ec5065afed       gadget-c96rx
	833e97c02bed8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                                 57 seconds ago       Running             gcp-auth                                 0                   156bf40c563f6       gcp-auth-d4c87556c-zqln4
	4424c85329f23       registry.k8s.io/ingress-nginx/controller@sha256:b3aba22b1da80e7acfc52b115cae1d4c687172cbf2b742d5b502419c25ff340e                             About a minute ago   Running             controller                               0                   e10ed44462bdd       ingress-nginx-controller-69cff4fd79-jb8zv
	10b26bc10294e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   55f441f8c5096       csi-hostpathplugin-smprq
	f3dc984b991d5       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   55f441f8c5096       csi-hostpathplugin-smprq
	5774b6ce6e4d8       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   55f441f8c5096       csi-hostpathplugin-smprq
	3dbb29bf16b7c       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   55f441f8c5096       csi-hostpathplugin-smprq
	c571331614e44       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   55f441f8c5096       csi-hostpathplugin-smprq
	292bda3df59a8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80                   About a minute ago   Exited              patch                                    0                   024b609161fc0       ingress-nginx-admission-patch-mxnkl
	20c8f7c44cfc5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80                   About a minute ago   Exited              create                                   0                   1592cdf3756b4       ingress-nginx-admission-create-pj6hl
	872e553a1855f       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       About a minute ago   Running             local-path-provisioner                   0                   f014300aff766       local-path-provisioner-78b46b4d5c-7l998
	8cd27e0432eba       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  About a minute ago   Running             tiller                                   0                   95c309f95f47b       tiller-deploy-7b677967b9-nbhrz
	62825663d459a       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   d3e0a0580c4f3       snapshot-controller-58dbcc7b99-vcdv7
	3a0cbfb8533a7       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   0f8cc250754ce       snapshot-controller-58dbcc7b99-kvhcx
	05b66b80d4469       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              2 minutes ago        Running             csi-resizer                              0                   b528042fa6e12       csi-hostpath-resizer-0
	fbf7a363664a0       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             2 minutes ago        Running             csi-attacher                             0                   043d0fe93b5d7       csi-hostpath-attacher-0
	6235a6d15dc24       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   2 minutes ago        Running             csi-external-health-monitor-controller   0                   55f441f8c5096       csi-hostpathplugin-smprq
	2cfe88b27b10f       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   4894554a0a4f7       yakd-dashboard-9947fc6bf-jpxm4
	e15a5546060b9       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             2 minutes ago        Running             minikube-ingress-dns                     0                   4557aac9f9534       kube-ingress-dns-minikube
	03ee51d860372       gcr.io/cloud-spanner-emulator/emulator@sha256:5d905e581977bd3d543742e74ddb75c0ba65517cf19742089ae1be45b7b8aa49                               3 minutes ago        Running             cloud-spanner-emulator                   0                   22c36321c83d1       cloud-spanner-emulator-64c8c85f65-7ngts
	de3874681a2c0       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   dc935fb6329c8       storage-provisioner
	eb674b7d5ca2c       ead0a4a53df89                                                                                                                                3 minutes ago        Running             coredns                                  0                   51a494a77d739       coredns-5dd5756b68-n4w25
	1c6a3b1ceb7e6       83f6cc407eed8                                                                                                                                3 minutes ago        Running             kube-proxy                               0                   caea0a503ce46       kube-proxy-j7sl4
	464c056cefaf9       e3db313c6dbc0                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   9e2e65fc6f04a       kube-scheduler-addons-179200
	85e3467b5553b       73deb9a3f7025                                                                                                                                4 minutes ago        Running             etcd                                     0                   a9033c03f29d1       etcd-addons-179200
	3f4c26443f54e       7fe0e6f37db33                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   07bd90bc5d7cf       kube-apiserver-addons-179200
	616b7a934b050       d058aa5ab969c                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   381691e92a105       kube-controller-manager-addons-179200
	
	
	==> controller_ingress [4424c85329f2] <==
	W0116 01:43:43.713917       7 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0116 01:43:43.714095       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0116 01:43:43.728494       7 main.go:249] "Running in Kubernetes cluster" major="1" minor="28" git="v1.28.4" state="clean" commit="bae2c62678db2b5053817bc97181fcc2e8388103" platform="linux/amd64"
	I0116 01:43:43.897292       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0116 01:43:43.934829       7 ssl.go:536] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0116 01:43:43.965478       7 nginx.go:260] "Starting NGINX Ingress controller"
	I0116 01:43:43.993052       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"47233535-021d-4712-86fb-f9c0ee03c303", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0116 01:43:43.994802       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"64654d09-60d1-43f2-948c-b5f2a5e4c355", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0116 01:43:43.994964       7 event.go:298] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"b6a159cd-2c0c-4493-ab91-d902faf5b4a8", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0116 01:43:45.167328       7 nginx.go:303] "Starting NGINX process"
	I0116 01:43:45.167723       7 leaderelection.go:245] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0116 01:43:45.170158       7 nginx.go:323] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0116 01:43:45.170403       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0116 01:43:45.181386       7 leaderelection.go:255] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0116 01:43:45.181781       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-69cff4fd79-jb8zv"
	I0116 01:43:45.187654       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-69cff4fd79-jb8zv" node="addons-179200"
	I0116 01:43:45.318976       7 controller.go:210] "Backend successfully reloaded"
	I0116 01:43:45.319131       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0116 01:43:45.320052       7 event.go:298] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-69cff4fd79-jb8zv", UID:"caa106a6-c709-4f65-8d9d-7c7716e2b0a8", APIVersion:"v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         f503c4bb5fa7d857ad29e94970eb550c2bc00b7c
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.21.6
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [eb674b7d5ca2] <==
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = e76cd1f4241fbd336d5e1d56170ae69e8389ff4197cb4bacea4ab86ce4c2ec8f58098e2106677580c06728ae57d9f0250db8f5c40e7a5cff291fc37d7d4dfe8b
	[INFO] Reloading complete
	[INFO] 10.244.0.14:52181 - 24616 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000502301s
	[INFO] 10.244.0.14:52181 - 18733 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000606502s
	[INFO] 10.244.0.14:43196 - 12273 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000105401s
	[INFO] 10.244.0.14:43196 - 29324 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000228s
	[INFO] 10.244.0.14:51440 - 48255 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000753s
	[INFO] 10.244.0.14:51440 - 37234 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001032s
	[INFO] 10.244.0.14:45877 - 7116 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0000991s
	[INFO] 10.244.0.14:45877 - 60882 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000179501s
	[INFO] 10.244.0.14:48664 - 15241 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0001515s
	[INFO] 10.244.0.14:35253 - 16108 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000945s
	[INFO] 10.244.0.14:59816 - 30431 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000511s
	[INFO] 10.244.0.14:57880 - 15777 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001811s
	[INFO] 10.244.0.22:41570 - 2479 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000215302s
	[INFO] 10.244.0.22:52606 - 49845 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000299603s
	[INFO] 10.244.0.22:33176 - 64414 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125601s
	[INFO] 10.244.0.22:60809 - 40817 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100201s
	[INFO] 10.244.0.22:52344 - 24269 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000231802s
	[INFO] 10.244.0.22:50089 - 46200 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000202802s
	[INFO] 10.244.0.22:55096 - 19946 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 230 0.00198912s
	[INFO] 10.244.0.22:53942 - 50387 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.001703917s
	[INFO] 10.244.0.25:41424 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000157901s
	[INFO] 10.244.0.25:40544 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101401s
	
	
	==> describe nodes <==
	Name:               addons-179200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-179200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=addons-179200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T01_40_27_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-179200
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-179200"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 01:40:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-179200
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 01:44:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 01:44:34 +0000   Tue, 16 Jan 2024 01:40:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 01:44:34 +0000   Tue, 16 Jan 2024 01:40:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 01:44:34 +0000   Tue, 16 Jan 2024 01:40:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 01:44:34 +0000   Tue, 16 Jan 2024 01:40:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.117.123
	  Hostname:    addons-179200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914588Ki
	  pods:               110
	System Info:
	  Machine ID:                 e693c4737c694a3291c2fc361da882ca
	  System UUID:                8ca2b590-bb71-d14e-b7da-d8b24ee8c50e
	  Boot ID:                    53637d84-8a58-4463-893d-fbe2b8d299b7
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-64c8c85f65-7ngts      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  gadget                      gadget-c96rx                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  gcp-auth                    gcp-auth-d4c87556c-zqln4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  ingress-nginx               ingress-nginx-controller-69cff4fd79-jb8zv    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m30s
	  kube-system                 coredns-5dd5756b68-n4w25                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m2s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 csi-hostpathplugin-smprq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 etcd-addons-179200                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m16s
	  kube-system                 kube-apiserver-addons-179200                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-addons-179200        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-proxy-j7sl4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-addons-179200                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 snapshot-controller-58dbcc7b99-kvhcx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 snapshot-controller-58dbcc7b99-vcdv7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 tiller-deploy-7b677967b9-nbhrz               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  local-path-storage          local-path-provisioner-78b46b4d5c-7l998      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-jpxm4               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m49s  kube-proxy       
	  Normal  Starting                 4m15s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m15s  kubelet          Node addons-179200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s  kubelet          Node addons-179200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s  kubelet          Node addons-179200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m14s  kubelet          Node addons-179200 status is now: NodeReady
	  Normal  RegisteredNode           4m4s   node-controller  Node addons-179200 event: Registered Node addons-179200 in Controller
	
	
	==> dmesg <==
	[  +0.210011] systemd-fstab-generator[1005]: Ignoring "noauto" for root device
	[  +1.338885] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.403656] systemd-fstab-generator[1165]: Ignoring "noauto" for root device
	[  +0.155202] systemd-fstab-generator[1176]: Ignoring "noauto" for root device
	[  +0.164228] systemd-fstab-generator[1187]: Ignoring "noauto" for root device
	[  +0.246702] systemd-fstab-generator[1202]: Ignoring "noauto" for root device
	[Jan16 01:40] systemd-fstab-generator[1308]: Ignoring "noauto" for root device
	[  +5.611229] kauditd_printk_skb: 29 callbacks suppressed
	[  +4.775606] systemd-fstab-generator[1671]: Ignoring "noauto" for root device
	[  +0.759917] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.590534] systemd-fstab-generator[2622]: Ignoring "noauto" for root device
	[ +31.115831] kauditd_printk_skb: 24 callbacks suppressed
	[Jan16 01:41] kauditd_printk_skb: 14 callbacks suppressed
	[ +10.416336] kauditd_printk_skb: 13 callbacks suppressed
	[ +19.832659] kauditd_printk_skb: 63 callbacks suppressed
	[Jan16 01:42] kauditd_printk_skb: 18 callbacks suppressed
	[Jan16 01:43] kauditd_printk_skb: 34 callbacks suppressed
	[ +18.091706] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.413198] kauditd_printk_skb: 3 callbacks suppressed
	[ +15.356281] kauditd_printk_skb: 1 callbacks suppressed
	[  +6.071574] kauditd_printk_skb: 27 callbacks suppressed
	[Jan16 01:44] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.612550] kauditd_printk_skb: 1 callbacks suppressed
	[  +9.079152] kauditd_printk_skb: 3 callbacks suppressed
	[  +7.593060] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [85e3467b5553] <==
	{"level":"warn","ts":"2024-01-16T01:42:31.068171Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.473787ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13488"}
	{"level":"info","ts":"2024-01-16T01:42:31.068212Z","caller":"traceutil/trace.go:171","msg":"trace[1098438079] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:975; }","duration":"214.04179ms","start":"2024-01-16T01:42:30.854161Z","end":"2024-01-16T01:42:31.068203Z","steps":["trace[1098438079] 'range keys from in-memory index tree'  (duration: 213.343387ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T01:42:39.369774Z","caller":"traceutil/trace.go:171","msg":"trace[187595817] linearizableReadLoop","detail":"{readStateIndex:1044; appliedIndex:1043; }","duration":"159.686187ms","start":"2024-01-16T01:42:39.21007Z","end":"2024-01-16T01:42:39.369757Z","steps":["trace[187595817] 'read index received'  (duration: 159.445687ms)","trace[187595817] 'applied index is now lower than readState.Index'  (duration: 239.8µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T01:42:39.369981Z","caller":"traceutil/trace.go:171","msg":"trace[544336431] transaction","detail":"{read_only:false; response_revision:997; number_of_response:1; }","duration":"187.777191ms","start":"2024-01-16T01:42:39.182196Z","end":"2024-01-16T01:42:39.369973Z","steps":["trace[544336431] 'process raft request'  (duration: 187.37969ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T01:42:39.370161Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.087789ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10575"}
	{"level":"info","ts":"2024-01-16T01:42:39.370187Z","caller":"traceutil/trace.go:171","msg":"trace[497502207] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:997; }","duration":"160.129489ms","start":"2024-01-16T01:42:39.21005Z","end":"2024-01-16T01:42:39.370179Z","steps":["trace[497502207] 'agreement among raft nodes before linearized reading'  (duration: 160.048789ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T01:42:39.370471Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.584732ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82159"}
	{"level":"info","ts":"2024-01-16T01:42:39.370498Z","caller":"traceutil/trace.go:171","msg":"trace[1158034369] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:997; }","duration":"144.615032ms","start":"2024-01-16T01:42:39.225875Z","end":"2024-01-16T01:42:39.37049Z","steps":["trace[1158034369] 'agreement among raft nodes before linearized reading'  (duration: 144.407631ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T01:42:55.960714Z","caller":"traceutil/trace.go:171","msg":"trace[716774844] linearizableReadLoop","detail":"{readStateIndex:1096; appliedIndex:1095; }","duration":"364.758623ms","start":"2024-01-16T01:42:55.595884Z","end":"2024-01-16T01:42:55.960643Z","steps":["trace[716774844] 'read index received'  (duration: 364.578422ms)","trace[716774844] 'applied index is now lower than readState.Index'  (duration: 179.701µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T01:42:55.962187Z","caller":"traceutil/trace.go:171","msg":"trace[840482698] transaction","detail":"{read_only:false; response_revision:1045; number_of_response:1; }","duration":"440.546656ms","start":"2024-01-16T01:42:55.52028Z","end":"2024-01-16T01:42:55.960827Z","steps":["trace[840482698] 'process raft request'  (duration: 440.116055ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T01:42:55.962513Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T01:42:55.520268Z","time spent":"441.985261ms","remote":"127.0.0.1:60566","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1042 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-01-16T01:42:55.962891Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"367.02553ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-01-16T01:42:55.962994Z","caller":"traceutil/trace.go:171","msg":"trace[834616974] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1045; }","duration":"367.12943ms","start":"2024-01-16T01:42:55.595857Z","end":"2024-01-16T01:42:55.962986Z","steps":["trace[834616974] 'agreement among raft nodes before linearized reading'  (duration: 366.99683ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T01:42:55.963071Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T01:42:55.595843Z","time spent":"367.17073ms","remote":"127.0.0.1:60588","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":522,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"warn","ts":"2024-01-16T01:42:55.963367Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.756941ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10575"}
	{"level":"info","ts":"2024-01-16T01:42:55.963394Z","caller":"traceutil/trace.go:171","msg":"trace[921960155] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1045; }","duration":"240.785641ms","start":"2024-01-16T01:42:55.722602Z","end":"2024-01-16T01:42:55.963387Z","steps":["trace[921960155] 'agreement among raft nodes before linearized reading'  (duration: 240.721441ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T01:42:55.964371Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.768444ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82460"}
	{"level":"info","ts":"2024-01-16T01:42:55.964419Z","caller":"traceutil/trace.go:171","msg":"trace[946136100] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1045; }","duration":"241.817145ms","start":"2024-01-16T01:42:55.722595Z","end":"2024-01-16T01:42:55.964412Z","steps":["trace[946136100] 'agreement among raft nodes before linearized reading'  (duration: 241.692344ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T01:43:15.564149Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.728807ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.27.117.123\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-01-16T01:43:15.564213Z","caller":"traceutil/trace.go:171","msg":"trace[645096869] range","detail":"{range_begin:/registry/masterleases/172.27.117.123; range_end:; response_count:1; response_revision:1153; }","duration":"115.806007ms","start":"2024-01-16T01:43:15.448393Z","end":"2024-01-16T01:43:15.564199Z","steps":["trace[645096869] 'range keys from in-memory index tree'  (duration: 115.520507ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T01:43:34.466157Z","caller":"traceutil/trace.go:171","msg":"trace[1130694481] transaction","detail":"{read_only:false; response_revision:1199; number_of_response:1; }","duration":"240.972332ms","start":"2024-01-16T01:43:34.225097Z","end":"2024-01-16T01:43:34.466069Z","steps":["trace[1130694481] 'process raft request'  (duration: 240.83033ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T01:43:34.466649Z","caller":"traceutil/trace.go:171","msg":"trace[432044849] linearizableReadLoop","detail":"{readStateIndex:1257; appliedIndex:1257; }","duration":"111.665966ms","start":"2024-01-16T01:43:34.354973Z","end":"2024-01-16T01:43:34.466639Z","steps":["trace[432044849] 'read index received'  (duration: 111.661966ms)","trace[432044849] 'applied index is now lower than readState.Index'  (duration: 3µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T01:43:34.468045Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.076482ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13906"}
	{"level":"info","ts":"2024-01-16T01:43:34.468075Z","caller":"traceutil/trace.go:171","msg":"trace[1530354424] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1199; }","duration":"113.121882ms","start":"2024-01-16T01:43:34.354945Z","end":"2024-01-16T01:43:34.468067Z","steps":["trace[1530354424] 'agreement among raft nodes before linearized reading'  (duration: 111.785767ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T01:43:42.768213Z","caller":"traceutil/trace.go:171","msg":"trace[234298787] transaction","detail":"{read_only:false; response_revision:1221; number_of_response:1; }","duration":"159.645056ms","start":"2024-01-16T01:43:42.608534Z","end":"2024-01-16T01:43:42.768179Z","steps":["trace[234298787] 'process raft request'  (duration: 156.513724ms)"],"step_count":1}
	
	
	==> gcp-auth [833e97c02bed] <==
	2024/01/16 01:43:45 GCP Auth Webhook started!
	2024/01/16 01:43:47 Ready to marshal response ...
	2024/01/16 01:43:47 Ready to write response ...
	2024/01/16 01:43:47 Ready to marshal response ...
	2024/01/16 01:43:47 Ready to write response ...
	2024/01/16 01:43:57 Ready to marshal response ...
	2024/01/16 01:43:57 Ready to write response ...
	2024/01/16 01:44:03 Ready to marshal response ...
	2024/01/16 01:44:03 Ready to write response ...
	2024/01/16 01:44:09 Ready to marshal response ...
	2024/01/16 01:44:09 Ready to write response ...
	2024/01/16 01:44:22 Ready to marshal response ...
	2024/01/16 01:44:22 Ready to write response ...
	
	
	==> kernel <==
	 01:44:42 up 6 min,  0 users,  load average: 3.24, 2.75, 1.30
	Linux addons-179200 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [3f4c26443f54] <==
	I0116 01:41:13.883786       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0116 01:41:14.111157       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.108.113.169"}
	W0116 01:41:15.407829       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 01:41:17.471873       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.110.238.77"}
	I0116 01:41:23.502169       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 01:41:30.424620       1 trace.go:236] Trace[207309073]: "List" accept:application/json, */*,audit-id:95c1bc59-deb8-4c0a-8e38-95815a938733,client:172.27.112.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/kube-system/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (16-Jan-2024 01:41:29.855) (total time: 569ms):
	Trace[207309073]: ["List(recursive=true) etcd3" audit-id:95c1bc59-deb8-4c0a-8e38-95815a938733,key:/pods/kube-system,resourceVersion:,resourceVersionMatch:,limit:0,continue: 569ms (01:41:29.855)]
	Trace[207309073]: [569.390874ms] [569.390874ms] END
	I0116 01:41:30.431799       1 trace.go:236] Trace[696225797]: "List" accept:application/json, */*,audit-id:f9cd8f8c-19dc-4e92-a079-c58043eb39ac,client:172.27.112.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (16-Jan-2024 01:41:29.855) (total time: 576ms):
	Trace[696225797]: ["List(recursive=true) etcd3" audit-id:f9cd8f8c-19dc-4e92-a079-c58043eb39ac,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 576ms (01:41:29.855)]
	Trace[696225797]: [576.528985ms] [576.528985ms] END
	E0116 01:42:08.345034       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.195.204:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.195.204:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.195.204:443: connect: connection refused
	W0116 01:42:08.345429       1 handler_proxy.go:93] no RequestInfo found in the context
	E0116 01:42:08.345492       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0116 01:42:08.346332       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0116 01:42:08.346760       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.195.204:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.195.204:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.195.204:443: connect: connection refused
	E0116 01:42:08.350981       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.195.204:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.195.204:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.195.204:443: connect: connection refused
	E0116 01:42:08.372960       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.195.204:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.195.204:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.195.204:443: connect: connection refused
	I0116 01:42:08.520634       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 01:42:23.506920       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 01:43:23.508371       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0116 01:44:08.853596       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0116 01:44:25.841364       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [616b7a934b05] <==
	I0116 01:43:07.018555       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0116 01:43:07.019049       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0116 01:43:07.100563       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0116 01:43:07.108265       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0116 01:43:07.116836       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0116 01:43:07.117185       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0116 01:43:07.665642       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0116 01:43:37.026356       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0116 01:43:37.033187       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0116 01:43:37.107186       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0116 01:43:37.110344       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0116 01:43:44.054193       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="218.702µs"
	I0116 01:43:46.124960       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="10.356903ms"
	I0116 01:43:46.126653       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="25.401µs"
	I0116 01:43:46.946949       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0116 01:43:46.968817       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 01:43:47.357194       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 01:43:53.582548       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 01:43:58.593449       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="25.594025ms"
	I0116 01:43:58.593528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="48.1µs"
	I0116 01:44:02.898637       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 01:44:08.300259       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="6.6µs"
	I0116 01:44:19.876419       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="8.8µs"
	I0116 01:44:29.027167       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0116 01:44:38.583788       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	
	==> kube-proxy [1c6a3b1ceb7e] <==
	I0116 01:40:52.230617       1 server_others.go:69] "Using iptables proxy"
	I0116 01:40:52.383856       1 node.go:141] Successfully retrieved node IP: 172.27.117.123
	I0116 01:40:52.699739       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 01:40:52.699806       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 01:40:52.748517       1 server_others.go:152] "Using iptables Proxier"
	I0116 01:40:52.748724       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 01:40:52.749025       1 server.go:846] "Version info" version="v1.28.4"
	I0116 01:40:52.749082       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 01:40:52.759273       1 config.go:188] "Starting service config controller"
	I0116 01:40:52.759325       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 01:40:52.759374       1 config.go:97] "Starting endpoint slice config controller"
	I0116 01:40:52.759385       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 01:40:52.760376       1 config.go:315] "Starting node config controller"
	I0116 01:40:52.760390       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 01:40:52.861256       1 shared_informer.go:318] Caches are synced for node config
	I0116 01:40:52.861902       1 shared_informer.go:318] Caches are synced for service config
	I0116 01:40:52.862234       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [464c056cefaf] <==
	W0116 01:40:23.683480       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 01:40:23.684981       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 01:40:23.683523       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 01:40:23.685143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 01:40:23.683845       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 01:40:23.685465       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 01:40:23.688204       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 01:40:23.689105       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 01:40:24.580857       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 01:40:24.581137       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 01:40:24.669859       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 01:40:24.670333       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 01:40:24.689184       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 01:40:24.689307       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 01:40:24.873108       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 01:40:24.873160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 01:40:24.925368       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 01:40:24.925648       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 01:40:24.930743       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 01:40:24.931122       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 01:40:24.983849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 01:40:24.984404       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 01:40:25.197786       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 01:40:25.198143       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0116 01:40:28.070510       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 01:38:35 UTC, ends at Tue 2024-01-16 01:44:43 UTC. --
	Jan 16 01:44:28 addons-179200 kubelet[2648]: I0116 01:44:28.068812    2648 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/cbfa51db-b463-4111-aead-b04684bb72b3-gcp-creds\") on node \"addons-179200\" DevicePath \"\""
	Jan 16 01:44:28 addons-179200 kubelet[2648]: I0116 01:44:28.077816    2648 operation_generator.go:996] UnmountDevice succeeded for volume "pvc-32929aef-a75b-4ef5-ad3f-88ca531e136a" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^ba101102-b410-11ee-a2ab-122fd8917b93") on node "addons-179200"
	Jan 16 01:44:28 addons-179200 kubelet[2648]: I0116 01:44:28.169601    2648 reconciler_common.go:300] "Volume detached for volume \"pvc-32929aef-a75b-4ef5-ad3f-88ca531e136a\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^ba101102-b410-11ee-a2ab-122fd8917b93\") on node \"addons-179200\" DevicePath \"\""
	Jan 16 01:44:28 addons-179200 kubelet[2648]: I0116 01:44:28.311542    2648 scope.go:117] "RemoveContainer" containerID="eb4d9c3b4fe87713060cd50f96ef44576df9991f3b47967c26c71ad62b03179e"
	Jan 16 01:44:28 addons-179200 kubelet[2648]: I0116 01:44:28.381880    2648 scope.go:117] "RemoveContainer" containerID="eb4d9c3b4fe87713060cd50f96ef44576df9991f3b47967c26c71ad62b03179e"
	Jan 16 01:44:28 addons-179200 kubelet[2648]: E0116 01:44:28.383923    2648 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: eb4d9c3b4fe87713060cd50f96ef44576df9991f3b47967c26c71ad62b03179e" containerID="eb4d9c3b4fe87713060cd50f96ef44576df9991f3b47967c26c71ad62b03179e"
	Jan 16 01:44:28 addons-179200 kubelet[2648]: I0116 01:44:28.383997    2648 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"eb4d9c3b4fe87713060cd50f96ef44576df9991f3b47967c26c71ad62b03179e"} err="failed to get container status \"eb4d9c3b4fe87713060cd50f96ef44576df9991f3b47967c26c71ad62b03179e\": rpc error: code = Unknown desc = Error response from daemon: No such container: eb4d9c3b4fe87713060cd50f96ef44576df9991f3b47967c26c71ad62b03179e"
	Jan 16 01:44:29 addons-179200 kubelet[2648]: I0116 01:44:29.566814    2648 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cbfa51db-b463-4111-aead-b04684bb72b3" path="/var/lib/kubelet/pods/cbfa51db-b463-4111-aead-b04684bb72b3/volumes"
	Jan 16 01:44:29 addons-179200 kubelet[2648]: I0116 01:44:29.792961    2648 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hdk6\" (UniqueName: \"kubernetes.io/projected/af5be0f9-8a2f-4ac3-a743-f250dc378e3b-kube-api-access-7hdk6\") pod \"af5be0f9-8a2f-4ac3-a743-f250dc378e3b\" (UID: \"af5be0f9-8a2f-4ac3-a743-f250dc378e3b\") "
	Jan 16 01:44:29 addons-179200 kubelet[2648]: I0116 01:44:29.797306    2648 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af5be0f9-8a2f-4ac3-a743-f250dc378e3b-kube-api-access-7hdk6" (OuterVolumeSpecName: "kube-api-access-7hdk6") pod "af5be0f9-8a2f-4ac3-a743-f250dc378e3b" (UID: "af5be0f9-8a2f-4ac3-a743-f250dc378e3b"). InnerVolumeSpecName "kube-api-access-7hdk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 01:44:29 addons-179200 kubelet[2648]: I0116 01:44:29.894225    2648 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7hdk6\" (UniqueName: \"kubernetes.io/projected/af5be0f9-8a2f-4ac3-a743-f250dc378e3b-kube-api-access-7hdk6\") on node \"addons-179200\" DevicePath \"\""
	Jan 16 01:44:30 addons-179200 kubelet[2648]: I0116 01:44:30.405430    2648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb97b8073ae718bdd4362e56bef4e48d46316af053535be37e4d9fae5b9e6939"
	Jan 16 01:44:31 addons-179200 kubelet[2648]: I0116 01:44:31.563239    2648 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="af5be0f9-8a2f-4ac3-a743-f250dc378e3b" path="/var/lib/kubelet/pods/af5be0f9-8a2f-4ac3-a743-f250dc378e3b/volumes"
	Jan 16 01:44:34 addons-179200 kubelet[2648]: I0116 01:44:34.741794    2648 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhkvh\" (UniqueName: \"kubernetes.io/projected/4444260f-9552-477a-8f00-7b18a379d51e-kube-api-access-rhkvh\") pod \"4444260f-9552-477a-8f00-7b18a379d51e\" (UID: \"4444260f-9552-477a-8f00-7b18a379d51e\") "
	Jan 16 01:44:34 addons-179200 kubelet[2648]: I0116 01:44:34.742380    2648 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/4444260f-9552-477a-8f00-7b18a379d51e-device-plugin\") pod \"4444260f-9552-477a-8f00-7b18a379d51e\" (UID: \"4444260f-9552-477a-8f00-7b18a379d51e\") "
	Jan 16 01:44:34 addons-179200 kubelet[2648]: I0116 01:44:34.742586    2648 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4444260f-9552-477a-8f00-7b18a379d51e-device-plugin" (OuterVolumeSpecName: "device-plugin") pod "4444260f-9552-477a-8f00-7b18a379d51e" (UID: "4444260f-9552-477a-8f00-7b18a379d51e"). InnerVolumeSpecName "device-plugin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 16 01:44:34 addons-179200 kubelet[2648]: I0116 01:44:34.745871    2648 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4444260f-9552-477a-8f00-7b18a379d51e-kube-api-access-rhkvh" (OuterVolumeSpecName: "kube-api-access-rhkvh") pod "4444260f-9552-477a-8f00-7b18a379d51e" (UID: "4444260f-9552-477a-8f00-7b18a379d51e"). InnerVolumeSpecName "kube-api-access-rhkvh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 16 01:44:34 addons-179200 kubelet[2648]: I0116 01:44:34.843297    2648 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rhkvh\" (UniqueName: \"kubernetes.io/projected/4444260f-9552-477a-8f00-7b18a379d51e-kube-api-access-rhkvh\") on node \"addons-179200\" DevicePath \"\""
	Jan 16 01:44:34 addons-179200 kubelet[2648]: I0116 01:44:34.843347    2648 reconciler_common.go:300] "Volume detached for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/4444260f-9552-477a-8f00-7b18a379d51e-device-plugin\") on node \"addons-179200\" DevicePath \"\""
	Jan 16 01:44:35 addons-179200 kubelet[2648]: I0116 01:44:35.573452    2648 scope.go:117] "RemoveContainer" containerID="25e558ceb30e912f655bc833d6fc1244d6c458bdb6a1c69951990244b61fb8a2"
	Jan 16 01:44:37 addons-179200 kubelet[2648]: I0116 01:44:37.560360    2648 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4444260f-9552-477a-8f00-7b18a379d51e" path="/var/lib/kubelet/pods/4444260f-9552-477a-8f00-7b18a379d51e/volumes"
	Jan 16 01:44:40 addons-179200 kubelet[2648]: I0116 01:44:40.542068    2648 scope.go:117] "RemoveContainer" containerID="bfe972940889a7c03d51e69e34f91cac9babc4787ea4aac2906d2705529a72cc"
	Jan 16 01:44:42 addons-179200 kubelet[2648]: I0116 01:44:42.959536    2648 scope.go:117] "RemoveContainer" containerID="bfe972940889a7c03d51e69e34f91cac9babc4787ea4aac2906d2705529a72cc"
	Jan 16 01:44:42 addons-179200 kubelet[2648]: I0116 01:44:42.960417    2648 scope.go:117] "RemoveContainer" containerID="6040f05f2b54f7121e6c23a440ec1a86d387623bc7472e82d8f72e6b685cd14c"
	Jan 16 01:44:42 addons-179200 kubelet[2648]: E0116 01:44:42.962318    2648 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-c96rx_gadget(2daed79d-465f-4e36-a1dd-2064c909ba34)\"" pod="gadget/gadget-c96rx" podUID="2daed79d-465f-4e36-a1dd-2064c909ba34"
	
	
	==> storage-provisioner [de3874681a2c] <==
	I0116 01:41:19.200180       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 01:41:19.294361       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 01:41:19.294421       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 01:41:19.349162       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 01:41:19.352167       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-179200_d8193cde-1ccb-40a8-b8e5-9ce3921cddd5!
	I0116 01:41:19.381300       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c4d6eb46-94ee-4c00-b832-ef529db45095", APIVersion:"v1", ResourceVersion:"835", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-179200_d8193cde-1ccb-40a8-b8e5-9ce3921cddd5 became leader
	I0116 01:41:19.552737       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-179200_d8193cde-1ccb-40a8-b8e5-9ce3921cddd5!
	

-- /stdout --
** stderr ** 
	W0116 01:44:33.456383    9668 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-179200 -n addons-179200
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-179200 -n addons-179200: (12.8054799s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-179200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-pj6hl ingress-nginx-admission-patch-mxnkl
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-179200 describe pod ingress-nginx-admission-create-pj6hl ingress-nginx-admission-patch-mxnkl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-179200 describe pod ingress-nginx-admission-create-pj6hl ingress-nginx-admission-patch-mxnkl: exit status 1 (177.3329ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pj6hl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mxnkl" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-179200 describe pod ingress-nginx-admission-create-pj6hl ingress-nginx-admission-patch-mxnkl: exit status 1
--- FAIL: TestAddons/parallel/Registry (71.22s)

TestCertExpiration (890.19s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-279800 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-expiration-279800 --memory=2048 --cert-expiration=3m --driver=hyperv: exit status 90 (7m36.0882169s)

-- stdout --
	* [cert-expiration-279800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node cert-expiration-279800 in cluster cert-expiration-279800
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	
-- /stdout --
** stderr ** 
	W0116 03:35:29.211196    5108 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2024-01-16 03:40:40 UTC, ends at Tue 2024-01-16 03:43:05 UTC. --
	Jan 16 03:41:32 cert-expiration-279800 systemd[1]: Starting Docker Application Container Engine...
	Jan 16 03:41:32 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:41:32.218232487Z" level=info msg="Starting up"
	Jan 16 03:41:32 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:41:32.219275896Z" level=info msg="containerd not running, starting managed containerd"
	Jan 16 03:41:32 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:41:32.220738008Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=694
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.258459114Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.284137823Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.284321424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.286835545Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.286945945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.287294348Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.287396149Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.287500550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.287728552Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.287749952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.287841453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.288212856Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.288320457Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.288344357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.288559959Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.288730260Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.288887061Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.289029362Z" level=info msg="metadata content store policy set" policy=shared
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.300027752Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.300139553Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.300163053Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.300199953Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.300218553Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.300231053Z" level=info msg="NRI interface is disabled by configuration."
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.300253554Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.300379955Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.300426855Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.300446555Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.300462455Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.300478655Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.300514056Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.300531456Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.300881959Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.300945859Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.301007760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.301061560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.301111261Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.301256662Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303217678Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303334979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303359479Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303386679Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303445480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303464080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303478080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303497380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303615381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303658981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303675481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303690382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303705982Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303772982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303791282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303805082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303819983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303836383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303852183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303865383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303880583Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303897183Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303908783Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.303920383Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.304257286Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.304463688Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.304682590Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 16 03:41:32 cert-expiration-279800 dockerd[694]: time="2024-01-16T03:41:32.304707290Z" level=info msg="containerd successfully booted in 0.049010s"
	Jan 16 03:41:32 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:41:32.337177954Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 16 03:41:32 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:41:32.352498678Z" level=info msg="Loading containers: start."
	Jan 16 03:41:32 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:41:32.565451908Z" level=info msg="Loading containers: done."
	Jan 16 03:41:32 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:41:32.581210936Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 16 03:41:32 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:41:32.581231336Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 16 03:41:32 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:41:32.581238536Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 16 03:41:32 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:41:32.581244537Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 16 03:41:32 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:41:32.581263637Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 16 03:41:32 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:41:32.581357537Z" level=info msg="Daemon has completed initialization"
	Jan 16 03:41:32 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:41:32.631753847Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 16 03:41:32 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:41:32.631877248Z" level=info msg="API listen on [::]:2376"
	Jan 16 03:41:32 cert-expiration-279800 systemd[1]: Started Docker Application Container Engine.
	Jan 16 03:42:03 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:42:03.970687124Z" level=info msg="Processing signal 'terminated'"
	Jan 16 03:42:03 cert-expiration-279800 systemd[1]: Stopping Docker Application Container Engine...
	Jan 16 03:42:03 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:42:03.972930624Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 16 03:42:03 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:42:03.973244624Z" level=info msg="Daemon shutdown complete"
	Jan 16 03:42:03 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:42:03.973334024Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 16 03:42:03 cert-expiration-279800 dockerd[688]: time="2024-01-16T03:42:03.973366924Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 16 03:42:04 cert-expiration-279800 systemd[1]: docker.service: Succeeded.
	Jan 16 03:42:04 cert-expiration-279800 systemd[1]: Stopped Docker Application Container Engine.
	Jan 16 03:42:04 cert-expiration-279800 systemd[1]: Starting Docker Application Container Engine...
	Jan 16 03:42:05 cert-expiration-279800 dockerd[1025]: time="2024-01-16T03:42:05.070492224Z" level=info msg="Starting up"
	Jan 16 03:43:05 cert-expiration-279800 dockerd[1025]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 16 03:43:05 cert-expiration-279800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 16 03:43:05 cert-expiration-279800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 16 03:43:05 cert-expiration-279800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p cert-expiration-279800 --memory=2048 --cert-expiration=3m --driver=hyperv" : exit status 90
E0116 03:43:13.045686   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 03:43:46.628029   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-279800 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-279800 --memory=2048 --cert-expiration=8760h --driver=hyperv: (2m57.8278472s)
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-279800] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node cert-expiration-279800 in cluster cert-expiration-279800
	* Updating the running hyperv "cert-expiration-279800" VM ...
	* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Done! kubectl is now configured to use "cert-expiration-279800" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	W0116 03:46:05.321763   12076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! This VM is having trouble accessing https://registry.k8s.io
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-01-16 03:49:03.0223808 +0000 UTC m=+7956.955664301
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-279800 -n cert-expiration-279800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-expiration-279800 -n cert-expiration-279800: (12.4834082s)
helpers_test.go:244: <<< TestCertExpiration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestCertExpiration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-expiration-279800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p cert-expiration-279800 logs -n 25: (8.3893534s)
helpers_test.go:252: TestCertExpiration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-700700 sudo                                | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | journalctl -xeu kubelet --all                        |                           |                   |         |                     |                     |
	|         | --full --no-pager                                    |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo cat                            | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo cat                            | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo                                | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | systemctl status docker --all                        |                           |                   |         |                     |                     |
	|         | --full --no-pager                                    |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo                                | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | systemctl cat docker                                 |                           |                   |         |                     |                     |
	|         | --no-pager                                           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo cat                            | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo docker                         | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | system info                                          |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo                                | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | systemctl status cri-docker                          |                           |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo                                | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |                   |         |                     |                     |
	|         | --no-pager                                           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo cat                            | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo cat                            | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo                                | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | cri-dockerd --version                                |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo                                | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | systemctl status containerd                          |                           |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo                                | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | systemctl cat containerd                             |                           |                   |         |                     |                     |
	|         | --no-pager                                           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo cat                            | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo cat                            | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo                                | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | containerd config dump                               |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo                                | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | systemctl status crio --all                          |                           |                   |         |                     |                     |
	|         | --full --no-pager                                    |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo                                | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo find                           | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |                   |         |                     |                     |
	| ssh     | -p cilium-700700 sudo crio                           | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | config                                               |                           |                   |         |                     |                     |
	| delete  | -p cilium-700700                                     | cilium-700700             | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC | 16 Jan 24 03:48 UTC |
	| start   | -p force-systemd-env-378000                          | force-systemd-env-378000  | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | --memory=2048                                        |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                               |                           |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                           |                   |         |                     |                     |
	| delete  | -p pause-143300                                      | pause-143300              | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC | 16 Jan 24 03:48 UTC |
	| start   | -p kubernetes-upgrade-069600                         | kubernetes-upgrade-069600 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:48 UTC |                     |
	|         | --memory=2200                                        |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                           |                   |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:48:45
	Running on machine: minikube3
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:48:45.578901    4960 out.go:296] Setting OutFile to fd 1788 ...
	I0116 03:48:45.578901    4960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:48:45.578901    4960 out.go:309] Setting ErrFile to fd 1776...
	I0116 03:48:45.578901    4960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:48:45.600892    4960 out.go:303] Setting JSON to false
	I0116 03:48:45.605924    4960 start.go:128] hostinfo: {"hostname":"minikube3","uptime":55316,"bootTime":1705321609,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0116 03:48:45.605980    4960 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0116 03:48:45.607320    4960 out.go:177] * [kubernetes-upgrade-069600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0116 03:48:45.607898    4960 notify.go:220] Checking for updates...
	I0116 03:48:45.608695    4960 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 03:48:45.609337    4960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:48:45.609977    4960 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0116 03:48:45.610692    4960 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:48:45.611221    4960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:48:42.686549    7880 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:48:42.686549    7880 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:48:42.686629    7880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-072400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:48:45.427030    7880 main.go:141] libmachine: [stdout =====>] : 172.27.121.210
	
	I0116 03:48:45.427030    7880 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:48:45.427030    7880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-072400 ).state
	I0116 03:48:45.613207    4960 config.go:182] Loaded profile config "cert-expiration-279800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:48:45.613487    4960 config.go:182] Loaded profile config "cert-options-072400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:48:45.614025    4960 config.go:182] Loaded profile config "force-systemd-env-378000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:48:45.614138    4960 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:48:47.712179    7880 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:48:47.712179    7880 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:48:47.712467    7880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-072400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:48:50.773676    7880 main.go:141] libmachine: [stdout =====>] : 172.27.121.210
	
	I0116 03:48:50.773676    7880 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:48:50.773676    7880 provision.go:138] copyHostCerts
	I0116 03:48:50.773676    7880 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0116 03:48:50.773676    7880 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0116 03:48:50.774237    7880 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0116 03:48:50.775466    7880 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0116 03:48:50.775466    7880 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0116 03:48:50.776045    7880 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0116 03:48:50.777519    7880 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0116 03:48:50.777519    7880 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0116 03:48:50.777636    7880 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1675 bytes)
	I0116 03:48:50.778058    7880 provision.go:112] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cert-options-072400 san=[172.27.121.210 172.27.121.210 localhost 127.0.0.1 minikube cert-options-072400]
	I0116 03:48:51.222889   12076 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 03:48:51.222889   12076 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 03:48:51.222889   12076 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 03:48:51.222889   12076 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 03:48:51.223511   12076 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 03:48:51.223511   12076 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:48:51.223511   12076 out.go:204]   - Generating certificates and keys ...
	I0116 03:48:51.224504   12076 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 03:48:51.224504   12076 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 03:48:51.224504   12076 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 03:48:51.224504   12076 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 03:48:51.224504   12076 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 03:48:51.224504   12076 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 03:48:51.224504   12076 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 03:48:51.225520   12076 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-279800 localhost] and IPs [172.27.112.221 127.0.0.1 ::1]
	I0116 03:48:51.225520   12076 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 03:48:51.225520   12076 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-279800 localhost] and IPs [172.27.112.221 127.0.0.1 ::1]
	I0116 03:48:51.226513   12076 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 03:48:51.226513   12076 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 03:48:51.226513   12076 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 03:48:51.226513   12076 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:48:51.226513   12076 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:48:51.226513   12076 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:48:51.226513   12076 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:48:51.226513   12076 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:48:51.227502   12076 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 03:48:51.227502   12076 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:48:51.228991   12076 out.go:204]   - Booting up control plane ...
	I0116 03:48:51.228991   12076 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:48:51.229503   12076 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:48:51.229503   12076 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:48:51.229503   12076 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:48:51.229503   12076 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:48:51.229503   12076 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 03:48:51.230501   12076 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 03:48:51.230501   12076 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.508289 seconds
	I0116 03:48:51.230501   12076 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 03:48:51.230501   12076 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 03:48:51.230501   12076 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 03:48:51.231500   12076 kubeadm.go:322] [mark-control-plane] Marking the node cert-expiration-279800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 03:48:51.231500   12076 kubeadm.go:322] [bootstrap-token] Using token: e2c6uq.qovmrm00c4uoj1s2
	I0116 03:48:51.231500   12076 out.go:204]   - Configuring RBAC rules ...
	I0116 03:48:51.232499   12076 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 03:48:51.232499   12076 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 03:48:51.232499   12076 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 03:48:51.232499   12076 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 03:48:51.233512   12076 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 03:48:51.233512   12076 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 03:48:51.233512   12076 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 03:48:51.233512   12076 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 03:48:51.233512   12076 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 03:48:51.233512   12076 kubeadm.go:322] 
	I0116 03:48:51.233512   12076 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 03:48:51.233512   12076 kubeadm.go:322] 
	I0116 03:48:51.234543   12076 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 03:48:51.234543   12076 kubeadm.go:322] 
	I0116 03:48:51.234543   12076 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 03:48:51.234543   12076 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 03:48:51.234543   12076 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 03:48:51.234543   12076 kubeadm.go:322] 
	I0116 03:48:51.234543   12076 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 03:48:51.234543   12076 kubeadm.go:322] 
	I0116 03:48:51.234543   12076 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 03:48:51.234543   12076 kubeadm.go:322] 
	I0116 03:48:51.234543   12076 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 03:48:51.234543   12076 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 03:48:51.235533   12076 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 03:48:51.235533   12076 kubeadm.go:322] 
	I0116 03:48:51.235533   12076 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 03:48:51.235533   12076 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 03:48:51.235533   12076 kubeadm.go:322] 
	I0116 03:48:51.235533   12076 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token e2c6uq.qovmrm00c4uoj1s2 \
	I0116 03:48:51.235533   12076 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:66ef9a38e06c175fa30850fd5c63399966a4115300a5c161cb370d2d951391b9 \
	I0116 03:48:51.235533   12076 kubeadm.go:322] 	--control-plane 
	I0116 03:48:51.235533   12076 kubeadm.go:322] 
	I0116 03:48:51.236504   12076 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 03:48:51.236504   12076 kubeadm.go:322] 
	I0116 03:48:51.236504   12076 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token e2c6uq.qovmrm00c4uoj1s2 \
	I0116 03:48:51.236504   12076 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:66ef9a38e06c175fa30850fd5c63399966a4115300a5c161cb370d2d951391b9 
	I0116 03:48:51.236504   12076 cni.go:84] Creating CNI manager for ""
	I0116 03:48:51.236504   12076 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0116 03:48:51.237503   12076 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0116 03:48:51.251517    4960 out.go:177] * Using the hyperv driver based on user configuration
	I0116 03:48:51.252516    4960 start.go:298] selected driver: hyperv
	I0116 03:48:51.252516    4960 start.go:902] validating driver "hyperv" against <nil>
	I0116 03:48:51.252516    4960 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:48:51.312409    4960 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 03:48:51.314344    4960 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 03:48:51.314344    4960 cni.go:84] Creating CNI manager for ""
	I0116 03:48:51.314344    4960 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0116 03:48:51.314344    4960 start_flags.go:321] config:
	{Name:kubernetes-upgrade-069600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-069600 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:48:51.315348    4960 iso.go:125] acquiring lock: {Name:mk2c0b62d272a573835231fdc54419c800e07e34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:48:51.316370    4960 out.go:177] * Starting control plane node kubernetes-upgrade-069600 in cluster kubernetes-upgrade-069600
	I0116 03:48:51.253520   12076 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0116 03:48:51.279357   12076 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0116 03:48:51.352946   12076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:48:51.369366   12076 ops.go:34] apiserver oom_adj: -16
	I0116 03:48:51.369366   12076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=cert-expiration-279800 minikube.k8s.io/updated_at=2024_01_16T03_48_51_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:48:51.369366   12076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:48:51.991557   12076 kubeadm.go:1088] duration metric: took 638.6068ms to wait for elevateKubeSystemPrivileges.
	I0116 03:48:51.991557   12076 kubeadm.go:406] StartCluster complete in 16.1792491s
	I0116 03:48:51.991557   12076 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:48:51.991799   12076 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 03:48:51.993048   12076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:48:51.994407   12076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:48:51.994495   12076 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:48:51.994495   12076 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-279800"
	I0116 03:48:51.994495   12076 addons.go:234] Setting addon storage-provisioner=true in "cert-expiration-279800"
	I0116 03:48:51.994495   12076 host.go:66] Checking if "cert-expiration-279800" exists ...
	I0116 03:48:51.994495   12076 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-279800"
	I0116 03:48:51.994495   12076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-279800"
	I0116 03:48:51.994495   12076 config.go:182] Loaded profile config "cert-expiration-279800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:48:51.995675   12076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-279800 ).state
	I0116 03:48:51.996254   12076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-279800 ).state
	I0116 03:48:52.185146   12076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.112.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 03:48:52.538200   12076 kapi.go:248] "coredns" deployment in "kube-system" namespace and "cert-expiration-279800" context rescaled to 1 replicas
	I0116 03:48:52.538200   12076 start.go:223] Will wait 6m0s for node &{Name: IP:172.27.112.221 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0116 03:48:52.538944   12076 out.go:177] * Verifying Kubernetes components...
	I0116 03:48:52.565390   12076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:48:53.446526   12076 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.112.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.2613721s)
	I0116 03:48:53.446689   12076 start.go:929] {"host.minikube.internal": 172.27.112.1} host record injected into CoreDNS's ConfigMap
	I0116 03:48:53.449260   12076 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:48:53.468059   12076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:48:53.490093   12076 api_server.go:72] duration metric: took 951.8867ms to wait for apiserver process to appear ...
	I0116 03:48:53.490093   12076 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:48:53.490093   12076 api_server.go:253] Checking apiserver healthz at https://172.27.112.221:8443/healthz ...
	I0116 03:48:53.499984   12076 api_server.go:279] https://172.27.112.221:8443/healthz returned 200:
	ok
	I0116 03:48:53.503337   12076 api_server.go:141] control plane version: v1.28.4
	I0116 03:48:53.503337   12076 api_server.go:131] duration metric: took 13.243ms to wait for apiserver health ...
	I0116 03:48:53.503337   12076 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:48:53.512503   12076 system_pods.go:59] 4 kube-system pods found
	I0116 03:48:53.512503   12076 system_pods.go:61] "etcd-cert-expiration-279800" [80ff9e96-6e77-4827-9af2-300ff59ea546] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:48:53.512503   12076 system_pods.go:61] "kube-apiserver-cert-expiration-279800" [b5bf27f3-6bf3-4646-b310-b40cef8ec462] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:48:53.512503   12076 system_pods.go:61] "kube-controller-manager-cert-expiration-279800" [5253353c-aa77-4792-9ae1-f770d5138969] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:48:53.512503   12076 system_pods.go:61] "kube-scheduler-cert-expiration-279800" [f9075791-880d-4537-998e-1dd22240b525] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:48:53.512503   12076 system_pods.go:74] duration metric: took 9.1663ms to wait for pod list to return data ...
	I0116 03:48:53.512503   12076 kubeadm.go:581] duration metric: took 974.296ms to wait for : map[apiserver:true system_pods:true] ...
	I0116 03:48:53.512503   12076 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:48:53.517533   12076 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:48:53.517533   12076 node_conditions.go:123] node cpu capacity is 2
	I0116 03:48:53.517590   12076 node_conditions.go:105] duration metric: took 5.0304ms to run NodePressure ...
	I0116 03:48:53.517590   12076 start.go:228] waiting for startup goroutines ...
	I0116 03:48:54.924151   12076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:48:54.924151   12076 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:48:54.924151   12076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:48:54.924151   12076 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:48:54.925508   12076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:48:54.926321   12076 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:48:54.926321   12076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 03:48:54.926382   12076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-279800 ).state
	I0116 03:48:54.926742   12076 addons.go:234] Setting addon default-storageclass=true in "cert-expiration-279800"
	I0116 03:48:54.926796   12076 host.go:66] Checking if "cert-expiration-279800" exists ...
	I0116 03:48:54.927714   12076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-279800 ).state
	I0116 03:48:51.317363    4960 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0116 03:48:51.317363    4960 preload.go:148] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0116 03:48:51.317363    4960 cache.go:56] Caching tarball of preloaded images
	I0116 03:48:51.317363    4960 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0116 03:48:51.317363    4960 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0116 03:48:51.317363    4960 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubernetes-upgrade-069600\config.json ...
	I0116 03:48:51.318367    4960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubernetes-upgrade-069600\config.json: {Name:mk37f334e58738f400958c2d178e6d7fcf0a4bb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:48:51.319371    4960 start.go:365] acquiring machines lock for kubernetes-upgrade-069600: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:48:50.975995    7880 provision.go:172] copyRemoteCerts
	I0116 03:48:50.988994    7880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:48:50.988994    7880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-072400 ).state
	I0116 03:48:53.951220    7880 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:48:53.951418    7880 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:48:53.951418    7880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-072400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:48:57.230655   12076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:48:57.230874   12076 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:48:57.230982   12076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-279800 ).networkadapters[0]).ipaddresses[0]
	I0116 03:48:57.230982   12076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:48:57.231039   12076 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:48:57.231039   12076 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 03:48:57.231039   12076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 03:48:57.231039   12076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-279800 ).state
	I0116 03:48:59.526355   12076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:48:59.526355   12076 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:48:59.526355   12076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-279800 ).networkadapters[0]).ipaddresses[0]
	I0116 03:49:00.002742   12076 main.go:141] libmachine: [stdout =====>] : 172.27.112.221
	
	I0116 03:49:00.002742   12076 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:49:00.002994   12076 sshutil.go:53] new ssh client: &{IP:172.27.112.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\cert-expiration-279800\id_rsa Username:docker}
	I0116 03:49:00.150104   12076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 03:48:56.724492    7880 main.go:141] libmachine: [stdout =====>] : 172.27.121.210
	
	I0116 03:48:56.724598    7880 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:48:56.724598    7880 sshutil.go:53] new ssh client: &{IP:172.27.121.210 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\cert-options-072400\id_rsa Username:docker}
	I0116 03:48:56.835894    7880 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.8468611s)
	I0116 03:48:56.835894    7880 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:48:56.878891    7880 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:48:56.919947    7880 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I0116 03:48:56.960083    7880 provision.go:86] duration metric: configureAuth took 16.5295132s
	I0116 03:48:56.960083    7880 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:48:56.960781    7880 config.go:182] Loaded profile config "cert-options-072400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:48:56.960781    7880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-072400 ).state
	I0116 03:48:59.288805    7880 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:48:59.288805    7880 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:48:59.288922    7880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-072400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:49:02.181653   12076 main.go:141] libmachine: [stdout =====>] : 172.27.112.221
	
	I0116 03:49:02.181653   12076 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:49:02.181900   12076 sshutil.go:53] new ssh client: &{IP:172.27.112.221 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\cert-expiration-279800\id_rsa Username:docker}
	I0116 03:49:02.328330   12076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 03:49:02.782245   12076 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0116 03:49:02.782385   12076 addons.go:505] enable addons completed in 10.787818s: enabled=[storage-provisioner default-storageclass]
	I0116 03:49:02.782385   12076 start.go:233] waiting for cluster config update ...
	I0116 03:49:02.782385   12076 start.go:242] writing updated cluster config ...
	I0116 03:49:02.798027   12076 ssh_runner.go:195] Run: rm -f paused
	I0116 03:49:02.958502   12076 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:49:02.959249   12076 out.go:177] * Done! kubectl is now configured to use "cert-expiration-279800" cluster and "default" namespace by default
	I0116 03:49:02.021396    7880 main.go:141] libmachine: [stdout =====>] : 172.27.121.210
	
	I0116 03:49:02.021433    7880 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:49:02.028505    7880 main.go:141] libmachine: Using SSH client type: native
	I0116 03:49:02.029522    7880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.121.210 22 <nil> <nil>}
	I0116 03:49:02.029522    7880 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0116 03:49:02.194112    7880 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0116 03:49:02.194164    7880 buildroot.go:70] root file system type: tmpfs
	I0116 03:49:02.194259    7880 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0116 03:49:02.194259    7880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-072400 ).state
	I0116 03:49:04.460138    7880 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:49:04.460267    7880 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:49:04.460382    7880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-072400 ).networkadapters[0]).ipaddresses[0]
	I0116 03:49:07.193202    7880 main.go:141] libmachine: [stdout =====>] : 172.27.121.210
	
	I0116 03:49:07.193202    7880 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:49:07.199359    7880 main.go:141] libmachine: Using SSH client type: native
	I0116 03:49:07.200104    7880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.121.210 22 <nil> <nil>}
	I0116 03:49:07.200656    7880 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0116 03:49:07.382035    7880 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
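	(Editor's note: the unit file echoed above uses systemd's ExecStart-reset convention that its own comments describe — an empty `ExecStart=` clears the inherited command before a new one is set. A minimal drop-in illustrating the same pattern, with a hypothetical path and a simplified daemon command, might look like:)

	```ini
	# /etc/systemd/system/docker.service.d/override.conf  (hypothetical path)
	[Service]
	# The empty ExecStart= clears the value inherited from the base unit;
	# without it, systemd refuses a second ExecStart= for non-oneshot services.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	```

	(A `sudo systemctl daemon-reload` followed by `sudo systemctl restart docker` would apply such an override.)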
	I0116 03:49:07.382103    7880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-options-072400 ).state
	I0116 03:49:09.594876    7880 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:49:09.594876    7880 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:49:09.595123    7880 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-options-072400 ).networkadapters[0]).ipaddresses[0]
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-01-16 03:40:40 UTC, ends at Tue 2024-01-16 03:49:23 UTC. --
	Jan 16 03:49:03 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:03.652152406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:49:03 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:03.944761599Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 16 03:49:03 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:03.945007505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:49:03 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:03.945124908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 16 03:49:03 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:03.945215111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:49:03 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:03.970119874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 16 03:49:03 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:03.970505584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:49:03 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:03.970642288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 16 03:49:03 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:03.970747291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:49:04 cert-expiration-279800 cri-dockerd[1648]: time="2024-01-16T03:49:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/757d21db40939b40ba2be31f6d2f76dfc6162d44846f327f4c740f61ad5af840/resolv.conf as [nameserver 172.27.112.1]"
	Jan 16 03:49:04 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:04.391778879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 16 03:49:04 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:04.392776605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:49:04 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:04.395341772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 16 03:49:04 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:04.395440674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:49:04 cert-expiration-279800 cri-dockerd[1648]: time="2024-01-16T03:49:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4b9fe218f327471cbacf3e5f15911eb8fe65f1705c34c72afe6e9d1f8a90ce2f/resolv.conf as [nameserver 172.27.112.1]"
	Jan 16 03:49:04 cert-expiration-279800 cri-dockerd[1648]: time="2024-01-16T03:49:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fbbb31499662feb2791487e0487f9b9649a3a551c859fb9fcb6d24e6a48fab86/resolv.conf as [nameserver 172.27.112.1]"
	Jan 16 03:49:04 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:04.950416035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 16 03:49:04 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:04.967992693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:49:04 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:04.968207698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 16 03:49:04 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:04.968632309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:49:05 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:05.206215586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 16 03:49:05 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:05.207057807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:49:05 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:05.207175410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 16 03:49:05 cert-expiration-279800 dockerd[1764]: time="2024-01-16T03:49:05.207233212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:49:12 cert-expiration-279800 cri-dockerd[1648]: time="2024-01-16T03:49:12Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb2d9c2127c6c       ead0a4a53df89       18 seconds ago      Running             coredns                   0                   fbbb31499662f       coredns-5dd5756b68-vpwqp
	663e479336cc5       6e38f40d628db       19 seconds ago      Running             storage-provisioner       0                   4b9fe218f3274       storage-provisioner
	ae986abf228d8       83f6cc407eed8       19 seconds ago      Running             kube-proxy                0                   757d21db40939       kube-proxy-pd98j
	898f0c6462a8e       e3db313c6dbc0       40 seconds ago      Running             kube-scheduler            0                   ee82b7d320c46       kube-scheduler-cert-expiration-279800
	66762f53cc748       73deb9a3f7025       40 seconds ago      Running             etcd                      0                   5cc731527c6e8       etcd-cert-expiration-279800
	e3d753606bf38       d058aa5ab969c       40 seconds ago      Running             kube-controller-manager   0                   401b2d54eb149       kube-controller-manager-cert-expiration-279800
	bcc177794c277       7fe0e6f37db33       40 seconds ago      Running             kube-apiserver            0                   3d482367b5db4       kube-apiserver-cert-expiration-279800
	
	
	==> coredns [bb2d9c2127c6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e76cd1f4241fbd336d5e1d56170ae69e8389ff4197cb4bacea4ab86ce4c2ec8f58098e2106677580c06728ae57d9f0250db8f5c40e7a5cff291fc37d7d4dfe8b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49398 - 7858 "HINFO IN 4385474082047798529.8160541282544555452. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.047168077s
	
	
	==> describe nodes <==
	Name:               cert-expiration-279800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=cert-expiration-279800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=cert-expiration-279800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T03_48_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:48:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  cert-expiration-279800
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:49:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:49:12 +0000   Tue, 16 Jan 2024 03:48:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:49:12 +0000   Tue, 16 Jan 2024 03:48:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:49:12 +0000   Tue, 16 Jan 2024 03:48:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:49:12 +0000   Tue, 16 Jan 2024 03:48:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.112.221
	  Hostname:    cert-expiration-279800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	System Info:
	  Machine ID:                 637b6248c64248a7bc4071b39419046c
	  System UUID:                28d6d632-90d5-7442-a51e-6fd422e9ab09
	  Boot ID:                    37d2f589-bc86-42ba-b60f-f80ac7f6eb41
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-vpwqp                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20s
	  kube-system                 etcd-cert-expiration-279800                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         32s
	  kube-system                 kube-apiserver-cert-expiration-279800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-cert-expiration-279800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-pd98j                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-scheduler-cert-expiration-279800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node cert-expiration-279800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node cert-expiration-279800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node cert-expiration-279800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  32s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                30s   kubelet          Node cert-expiration-279800 status is now: NodeReady
	  Normal  RegisteredNode           21s   node-controller  Node cert-expiration-279800 event: Registered Node cert-expiration-279800 in Controller
	
	
	==> dmesg <==
	[  +1.213695] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000004] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +8.208863] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan16 03:41] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.155398] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[Jan16 03:42] systemd-fstab-generator[951]: Ignoring "noauto" for root device
	[  +0.635867] systemd-fstab-generator[992]: Ignoring "noauto" for root device
	[  +0.181116] systemd-fstab-generator[1003]: Ignoring "noauto" for root device
	[  +0.211396] systemd-fstab-generator[1016]: Ignoring "noauto" for root device
	[Jan16 03:47] systemd-fstab-generator[1381]: Ignoring "noauto" for root device
	[  +0.607904] systemd-fstab-generator[1419]: Ignoring "noauto" for root device
	[  +0.177315] systemd-fstab-generator[1430]: Ignoring "noauto" for root device
	[  +0.224323] systemd-fstab-generator[1443]: Ignoring "noauto" for root device
	[Jan16 03:48] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.788634] systemd-fstab-generator[1603]: Ignoring "noauto" for root device
	[  +0.196410] systemd-fstab-generator[1614]: Ignoring "noauto" for root device
	[  +0.220281] systemd-fstab-generator[1625]: Ignoring "noauto" for root device
	[  +0.282980] systemd-fstab-generator[1640]: Ignoring "noauto" for root device
	[ +22.024908] systemd-fstab-generator[1749]: Ignoring "noauto" for root device
	[  +3.038416] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.616463] systemd-fstab-generator[2127]: Ignoring "noauto" for root device
	[  +0.524320] kauditd_printk_skb: 29 callbacks suppressed
	[  +9.355648] systemd-fstab-generator[3085]: Ignoring "noauto" for root device
	
	
	==> etcd [66762f53cc74] <==
	{"level":"info","ts":"2024-01-16T03:48:44.79577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a51c314ea6c826d0 switched to configuration voters=(11897438529481352912)"}
	{"level":"info","ts":"2024-01-16T03:48:44.795906Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5b0c0f216f11f4bc","local-member-id":"a51c314ea6c826d0","added-peer-id":"a51c314ea6c826d0","added-peer-peer-urls":["https://172.27.112.221:2380"]}
	{"level":"info","ts":"2024-01-16T03:48:44.80997Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-16T03:48:44.81331Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.27.112.221:2380"}
	{"level":"info","ts":"2024-01-16T03:48:44.81362Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.27.112.221:2380"}
	{"level":"info","ts":"2024-01-16T03:48:44.814512Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"a51c314ea6c826d0","initial-advertise-peer-urls":["https://172.27.112.221:2380"],"listen-peer-urls":["https://172.27.112.221:2380"],"advertise-client-urls":["https://172.27.112.221:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.112.221:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-16T03:48:44.817097Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-16T03:48:45.016389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a51c314ea6c826d0 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-16T03:48:45.016542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a51c314ea6c826d0 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-16T03:48:45.016664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a51c314ea6c826d0 received MsgPreVoteResp from a51c314ea6c826d0 at term 1"}
	{"level":"info","ts":"2024-01-16T03:48:45.016683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a51c314ea6c826d0 became candidate at term 2"}
	{"level":"info","ts":"2024-01-16T03:48:45.016691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a51c314ea6c826d0 received MsgVoteResp from a51c314ea6c826d0 at term 2"}
	{"level":"info","ts":"2024-01-16T03:48:45.016747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a51c314ea6c826d0 became leader at term 2"}
	{"level":"info","ts":"2024-01-16T03:48:45.016759Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a51c314ea6c826d0 elected leader a51c314ea6c826d0 at term 2"}
	{"level":"info","ts":"2024-01-16T03:48:45.022404Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:48:45.030538Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a51c314ea6c826d0","local-member-attributes":"{Name:cert-expiration-279800 ClientURLs:[https://172.27.112.221:2379]}","request-path":"/0/members/a51c314ea6c826d0/attributes","cluster-id":"5b0c0f216f11f4bc","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-16T03:48:45.03065Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:48:45.044351Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-16T03:48:45.046338Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T03:48:45.046359Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-16T03:48:45.047262Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T03:48:45.047428Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5b0c0f216f11f4bc","local-member-id":"a51c314ea6c826d0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:48:45.047547Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:48:45.047856Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T03:48:45.052947Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.112.221:2379"}
	
	
	==> kernel <==
	 03:49:23 up 8 min,  0 users,  load average: 1.13, 0.40, 0.13
	Linux cert-expiration-279800 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [bcc177794c27] <==
	I0116 03:48:47.567103       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0116 03:48:47.567246       1 shared_informer.go:318] Caches are synced for configmaps
	I0116 03:48:47.589595       1 controller.go:624] quota admission added evaluator for: namespaces
	I0116 03:48:47.611140       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0116 03:48:47.612193       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0116 03:48:47.638369       1 cache.go:39] Caches are synced for autoregister controller
	I0116 03:48:47.672822       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0116 03:48:47.673650       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0116 03:48:47.673663       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0116 03:48:47.673810       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0116 03:48:48.474441       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0116 03:48:48.483929       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0116 03:48:48.483948       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0116 03:48:49.343689       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 03:48:49.403421       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0116 03:48:49.511881       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0116 03:48:49.523217       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.27.112.221]
	I0116 03:48:49.524904       1 controller.go:624] quota admission added evaluator for: endpoints
	I0116 03:48:49.530618       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0116 03:48:49.578479       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0116 03:48:51.129950       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0116 03:48:51.149253       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0116 03:48:51.185263       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0116 03:49:03.243681       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0116 03:49:03.349791       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [e3d753606bf3] <==
	I0116 03:49:02.538758       1 shared_informer.go:318] Caches are synced for HPA
	I0116 03:49:02.538882       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0116 03:49:02.539991       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0116 03:49:02.547379       1 shared_informer.go:318] Caches are synced for service account
	I0116 03:49:02.552487       1 shared_informer.go:318] Caches are synced for disruption
	I0116 03:49:02.558870       1 shared_informer.go:318] Caches are synced for persistent volume
	I0116 03:49:02.559152       1 shared_informer.go:318] Caches are synced for attach detach
	I0116 03:49:02.587926       1 shared_informer.go:318] Caches are synced for resource quota
	I0116 03:49:02.588581       1 shared_informer.go:318] Caches are synced for resource quota
	I0116 03:49:02.626712       1 shared_informer.go:318] Caches are synced for cronjob
	I0116 03:49:02.626725       1 shared_informer.go:318] Caches are synced for daemon sets
	I0116 03:49:02.632501       1 shared_informer.go:318] Caches are synced for stateful set
	I0116 03:49:02.975363       1 shared_informer.go:318] Caches are synced for garbage collector
	I0116 03:49:02.975577       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0116 03:49:03.044513       1 shared_informer.go:318] Caches are synced for garbage collector
	I0116 03:49:03.252071       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 1"
	I0116 03:49:03.390978       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pd98j"
	I0116 03:49:03.461217       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vpwqp"
	I0116 03:49:03.491648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="241.690036ms"
	I0116 03:49:03.512772       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.03686ms"
	I0116 03:49:03.512862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.401µs"
	I0116 03:49:03.518133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.602µs"
	I0116 03:49:05.807023       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.301µs"
	I0116 03:49:05.846750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.716126ms"
	I0116 03:49:05.847330       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="495.712µs"
	
	
	==> kube-proxy [ae986abf228d] <==
	I0116 03:49:04.800579       1 server_others.go:69] "Using iptables proxy"
	I0116 03:49:04.824700       1 node.go:141] Successfully retrieved node IP: 172.27.112.221
	I0116 03:49:04.938854       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 03:49:04.938901       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 03:49:04.944043       1 server_others.go:152] "Using iptables Proxier"
	I0116 03:49:04.944109       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 03:49:04.944439       1 server.go:846] "Version info" version="v1.28.4"
	I0116 03:49:04.944478       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:49:04.945813       1 config.go:188] "Starting service config controller"
	I0116 03:49:04.945861       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 03:49:04.945904       1 config.go:97] "Starting endpoint slice config controller"
	I0116 03:49:04.945912       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 03:49:04.951461       1 config.go:315] "Starting node config controller"
	I0116 03:49:04.951501       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 03:49:05.047259       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 03:49:05.047303       1 shared_informer.go:318] Caches are synced for service config
	I0116 03:49:05.053471       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [898f0c6462a8] <==
	W0116 03:48:47.638477       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 03:48:47.638494       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 03:48:48.458667       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 03:48:48.458703       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 03:48:48.474191       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 03:48:48.474523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0116 03:48:48.474605       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 03:48:48.474619       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 03:48:48.622087       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0116 03:48:48.622179       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0116 03:48:48.671614       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0116 03:48:48.671692       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0116 03:48:48.679183       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 03:48:48.679235       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 03:48:48.727016       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0116 03:48:48.727358       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0116 03:48:48.775263       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0116 03:48:48.775354       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0116 03:48:48.879120       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 03:48:48.879255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 03:48:48.898905       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 03:48:48.898937       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 03:48:48.950710       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 03:48:48.951110       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0116 03:48:50.818227       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:40:40 UTC, ends at Tue 2024-01-16 03:49:23 UTC. --
	Jan 16 03:48:52 cert-expiration-279800 kubelet[3117]: I0116 03:48:52.375071    3117 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Jan 16 03:48:52 cert-expiration-279800 kubelet[3117]: I0116 03:48:52.729435    3117 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-cert-expiration-279800" podStartSLOduration=1.729338309 podCreationTimestamp="2024-01-16 03:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 03:48:52.714536701 +0000 UTC m=+1.629107887" watchObservedRunningTime="2024-01-16 03:48:52.729338309 +0000 UTC m=+1.643909395"
	Jan 16 03:48:52 cert-expiration-279800 kubelet[3117]: I0116 03:48:52.748319    3117 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-cert-expiration-279800" podStartSLOduration=1.748214157 podCreationTimestamp="2024-01-16 03:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 03:48:52.732416614 +0000 UTC m=+1.646987700" watchObservedRunningTime="2024-01-16 03:48:52.748214157 +0000 UTC m=+1.662785243"
	Jan 16 03:48:52 cert-expiration-279800 kubelet[3117]: I0116 03:48:52.771483    3117 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-cert-expiration-279800" podStartSLOduration=1.771401453 podCreationTimestamp="2024-01-16 03:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 03:48:52.7494784 +0000 UTC m=+1.664049486" watchObservedRunningTime="2024-01-16 03:48:52.771401453 +0000 UTC m=+1.685972539"
	Jan 16 03:48:52 cert-expiration-279800 kubelet[3117]: I0116 03:48:52.820993    3117 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-cert-expiration-279800" podStartSLOduration=1.820942553 podCreationTimestamp="2024-01-16 03:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 03:48:52.772712898 +0000 UTC m=+1.687284084" watchObservedRunningTime="2024-01-16 03:48:52.820942553 +0000 UTC m=+1.735513639"
	Jan 16 03:48:53 cert-expiration-279800 kubelet[3117]: I0116 03:48:53.669942    3117 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 16 03:49:02 cert-expiration-279800 kubelet[3117]: I0116 03:49:02.566958    3117 topology_manager.go:215] "Topology Admit Handler" podUID="3d679b75-9786-4936-9147-82e563a3c80d" podNamespace="kube-system" podName="storage-provisioner"
	Jan 16 03:49:02 cert-expiration-279800 kubelet[3117]: I0116 03:49:02.694888    3117 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3d679b75-9786-4936-9147-82e563a3c80d-tmp\") pod \"storage-provisioner\" (UID: \"3d679b75-9786-4936-9147-82e563a3c80d\") " pod="kube-system/storage-provisioner"
	Jan 16 03:49:02 cert-expiration-279800 kubelet[3117]: I0116 03:49:02.694972    3117 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdfsx\" (UniqueName: \"kubernetes.io/projected/3d679b75-9786-4936-9147-82e563a3c80d-kube-api-access-rdfsx\") pod \"storage-provisioner\" (UID: \"3d679b75-9786-4936-9147-82e563a3c80d\") " pod="kube-system/storage-provisioner"
	Jan 16 03:49:02 cert-expiration-279800 kubelet[3117]: E0116 03:49:02.809555    3117 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jan 16 03:49:02 cert-expiration-279800 kubelet[3117]: E0116 03:49:02.809710    3117 projected.go:198] Error preparing data for projected volume kube-api-access-rdfsx for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jan 16 03:49:02 cert-expiration-279800 kubelet[3117]: E0116 03:49:02.809848    3117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3d679b75-9786-4936-9147-82e563a3c80d-kube-api-access-rdfsx podName:3d679b75-9786-4936-9147-82e563a3c80d nodeName:}" failed. No retries permitted until 2024-01-16 03:49:03.309766559 +0000 UTC m=+12.224337645 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rdfsx" (UniqueName: "kubernetes.io/projected/3d679b75-9786-4936-9147-82e563a3c80d-kube-api-access-rdfsx") pod "storage-provisioner" (UID: "3d679b75-9786-4936-9147-82e563a3c80d") : configmap "kube-root-ca.crt" not found
	Jan 16 03:49:03 cert-expiration-279800 kubelet[3117]: I0116 03:49:03.409713    3117 topology_manager.go:215] "Topology Admit Handler" podUID="5a668c9d-632e-48d4-be68-7f0574db6164" podNamespace="kube-system" podName="kube-proxy-pd98j"
	Jan 16 03:49:03 cert-expiration-279800 kubelet[3117]: I0116 03:49:03.469978    3117 topology_manager.go:215] "Topology Admit Handler" podUID="49bea890-46b9-4866-88fb-4c60aae0ab38" podNamespace="kube-system" podName="coredns-5dd5756b68-vpwqp"
	Jan 16 03:49:03 cert-expiration-279800 kubelet[3117]: I0116 03:49:03.503416    3117 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvzwf\" (UniqueName: \"kubernetes.io/projected/49bea890-46b9-4866-88fb-4c60aae0ab38-kube-api-access-xvzwf\") pod \"coredns-5dd5756b68-vpwqp\" (UID: \"49bea890-46b9-4866-88fb-4c60aae0ab38\") " pod="kube-system/coredns-5dd5756b68-vpwqp"
	Jan 16 03:49:03 cert-expiration-279800 kubelet[3117]: I0116 03:49:03.503599    3117 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a668c9d-632e-48d4-be68-7f0574db6164-lib-modules\") pod \"kube-proxy-pd98j\" (UID: \"5a668c9d-632e-48d4-be68-7f0574db6164\") " pod="kube-system/kube-proxy-pd98j"
	Jan 16 03:49:03 cert-expiration-279800 kubelet[3117]: I0116 03:49:03.503712    3117 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5a668c9d-632e-48d4-be68-7f0574db6164-kube-proxy\") pod \"kube-proxy-pd98j\" (UID: \"5a668c9d-632e-48d4-be68-7f0574db6164\") " pod="kube-system/kube-proxy-pd98j"
	Jan 16 03:49:03 cert-expiration-279800 kubelet[3117]: I0116 03:49:03.503931    3117 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67tbc\" (UniqueName: \"kubernetes.io/projected/5a668c9d-632e-48d4-be68-7f0574db6164-kube-api-access-67tbc\") pod \"kube-proxy-pd98j\" (UID: \"5a668c9d-632e-48d4-be68-7f0574db6164\") " pod="kube-system/kube-proxy-pd98j"
	Jan 16 03:49:03 cert-expiration-279800 kubelet[3117]: I0116 03:49:03.504230    3117 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49bea890-46b9-4866-88fb-4c60aae0ab38-config-volume\") pod \"coredns-5dd5756b68-vpwqp\" (UID: \"49bea890-46b9-4866-88fb-4c60aae0ab38\") " pod="kube-system/coredns-5dd5756b68-vpwqp"
	Jan 16 03:49:03 cert-expiration-279800 kubelet[3117]: I0116 03:49:03.504574    3117 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a668c9d-632e-48d4-be68-7f0574db6164-xtables-lock\") pod \"kube-proxy-pd98j\" (UID: \"5a668c9d-632e-48d4-be68-7f0574db6164\") " pod="kube-system/kube-proxy-pd98j"
	Jan 16 03:49:04 cert-expiration-279800 kubelet[3117]: I0116 03:49:04.742165    3117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b9fe218f327471cbacf3e5f15911eb8fe65f1705c34c72afe6e9d1f8a90ce2f"
	Jan 16 03:49:05 cert-expiration-279800 kubelet[3117]: I0116 03:49:05.809004    3117 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=5.808954355 podCreationTimestamp="2024-01-16 03:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 03:49:05.781134946 +0000 UTC m=+14.695706032" watchObservedRunningTime="2024-01-16 03:49:05.808954355 +0000 UTC m=+14.723525441"
	Jan 16 03:49:05 cert-expiration-279800 kubelet[3117]: I0116 03:49:05.831995    3117 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vpwqp" podStartSLOduration=2.831954942 podCreationTimestamp="2024-01-16 03:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 03:49:05.810864104 +0000 UTC m=+14.725435290" watchObservedRunningTime="2024-01-16 03:49:05.831954942 +0000 UTC m=+14.746526028"
	Jan 16 03:49:12 cert-expiration-279800 kubelet[3117]: I0116 03:49:12.478311    3117 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 16 03:49:12 cert-expiration-279800 kubelet[3117]: I0116 03:49:12.479549    3117 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	
	
	==> storage-provisioner [663e479336cc] <==
	I0116 03:49:05.430662       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0116 03:49:05.461037       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0116 03:49:05.462247       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0116 03:49:05.476002       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0116 03:49:05.477646       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_cert-expiration-279800_784eb7b1-52dd-4b8c-ad75-94979d8979c0!
	I0116 03:49:05.477895       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a797c794-617d-4e6d-9db6-316613bae21b", APIVersion:"v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' cert-expiration-279800_784eb7b1-52dd-4b8c-ad75-94979d8979c0 became leader
	I0116 03:49:05.583558       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_cert-expiration-279800_784eb7b1-52dd-4b8c-ad75-94979d8979c0!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 03:49:15.623781    8516 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-expiration-279800 -n cert-expiration-279800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p cert-expiration-279800 -n cert-expiration-279800: (12.6885847s)
helpers_test.go:261: (dbg) Run:  kubectl --context cert-expiration-279800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestCertExpiration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "cert-expiration-279800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-279800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-279800: (42.1853121s)
--- FAIL: TestCertExpiration (890.19s)

                                                
                                    
TestForceSystemdEnv (425.2s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-378000 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-env-378000 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: exit status 90 (4m50.228089s)

                                                
                                                
-- stdout --
	* [force-systemd-env-378000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node force-systemd-env-378000 in cluster force-systemd-env-378000
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 03:48:17.156150    5168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0116 03:48:17.249232    5168 out.go:296] Setting OutFile to fd 1688 ...
	I0116 03:48:17.249232    5168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:48:17.249232    5168 out.go:309] Setting ErrFile to fd 1540...
	I0116 03:48:17.249232    5168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:48:17.275252    5168 out.go:303] Setting JSON to false
	I0116 03:48:17.278235    5168 start.go:128] hostinfo: {"hostname":"minikube3","uptime":55288,"bootTime":1705321609,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0116 03:48:17.278235    5168 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0116 03:48:17.457254    5168 out.go:177] * [force-systemd-env-378000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0116 03:48:17.458901    5168 notify.go:220] Checking for updates...
	I0116 03:48:17.507664    5168 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 03:48:17.559391    5168 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0116 03:48:17.609232    5168 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:48:17.659512    5168 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:48:17.660525    5168 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0116 03:48:17.711407    5168 config.go:182] Loaded profile config "cert-expiration-279800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:48:17.711569    5168 config.go:182] Loaded profile config "cert-options-072400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:48:17.712338    5168 config.go:182] Loaded profile config "pause-143300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:48:17.712557    5168 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:48:23.406545    5168 out.go:177] * Using the hyperv driver based on user configuration
	I0116 03:48:23.407244    5168 start.go:298] selected driver: hyperv
	I0116 03:48:23.407244    5168 start.go:902] validating driver "hyperv" against <nil>
	I0116 03:48:23.407244    5168 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:48:23.472609    5168 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 03:48:23.474376    5168 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 03:48:23.474454    5168 cni.go:84] Creating CNI manager for ""
	I0116 03:48:23.474518    5168 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0116 03:48:23.474559    5168 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 03:48:23.474559    5168 start_flags.go:321] config:
	{Name:force-systemd-env-378000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-378000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:48:23.474612    5168 iso.go:125] acquiring lock: {Name:mk2c0b62d272a573835231fdc54419c800e07e34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:48:23.511784    5168 out.go:177] * Starting control plane node force-systemd-env-378000 in cluster force-systemd-env-378000
	I0116 03:48:23.512439    5168 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0116 03:48:23.513554    5168 preload.go:148] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0116 03:48:23.513697    5168 cache.go:56] Caching tarball of preloaded images
	I0116 03:48:23.513697    5168 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0116 03:48:23.514393    5168 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0116 03:48:23.515011    5168 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\force-systemd-env-378000\config.json ...
	I0116 03:48:23.515011    5168 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\force-systemd-env-378000\config.json: {Name:mke941783abb0713276ad79aa701b7a1f0a67a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:48:23.516403    5168 start.go:365] acquiring machines lock for force-systemd-env-378000: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:49:33.427184    5168 start.go:369] acquired machines lock for "force-systemd-env-378000" in 1m9.9103193s
	I0116 03:49:33.427601    5168 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-378000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-378000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0116 03:49:33.427601    5168 start.go:125] createHost starting for "" (driver="hyperv")
	I0116 03:49:33.429191    5168 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0116 03:49:33.429547    5168 start.go:159] libmachine.API.Create for "force-systemd-env-378000" (driver="hyperv")
	I0116 03:49:33.429654    5168 client.go:168] LocalClient.Create starting
	I0116 03:49:33.430220    5168 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0116 03:49:33.430482    5168 main.go:141] libmachine: Decoding PEM data...
	I0116 03:49:33.430548    5168 main.go:141] libmachine: Parsing certificate...
	I0116 03:49:33.430808    5168 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0116 03:49:33.431047    5168 main.go:141] libmachine: Decoding PEM data...
	I0116 03:49:33.431117    5168 main.go:141] libmachine: Parsing certificate...
	I0116 03:49:33.431117    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0116 03:49:35.541600    5168 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0116 03:49:35.541957    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:49:35.542097    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0116 03:49:37.507203    5168 main.go:141] libmachine: [stdout =====>] : False
	
	I0116 03:49:37.507260    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:49:37.507329    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0116 03:49:39.176384    5168 main.go:141] libmachine: [stdout =====>] : True
	
	I0116 03:49:39.176668    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:49:39.176841    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0116 03:49:43.345713    5168 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0116 03:49:43.345782    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:49:43.349827    5168 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 03:49:43.783925    5168 main.go:141] libmachine: Creating SSH key...
	I0116 03:49:44.001367    5168 main.go:141] libmachine: Creating VM...
	I0116 03:49:44.001367    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0116 03:49:47.173537    5168 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0116 03:49:47.173605    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:49:47.173670    5168 main.go:141] libmachine: Using switch "Default Switch"
	I0116 03:49:47.173772    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0116 03:49:49.091341    5168 main.go:141] libmachine: [stdout =====>] : True
	
	I0116 03:49:49.091341    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:49:49.091341    5168 main.go:141] libmachine: Creating VHD
	I0116 03:49:49.091341    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\force-systemd-env-378000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0116 03:49:53.437142    5168 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\force-systemd-env-378000\f
	                          ixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : A8B1441F-EE28-4AC7-9092-7D95FFFC92A6
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0116 03:49:53.437283    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:49:53.437283    5168 main.go:141] libmachine: Writing magic tar header
	I0116 03:49:53.437283    5168 main.go:141] libmachine: Writing SSH key tar header
	I0116 03:49:53.446489    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\force-systemd-env-378000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\force-systemd-env-378000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0116 03:49:56.809953    5168 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:49:56.809953    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:49:56.809953    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\force-systemd-env-378000\disk.vhd' -SizeBytes 20000MB
	I0116 03:49:59.517476    5168 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:49:59.517634    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:49:59.517634    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM force-systemd-env-378000 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\force-systemd-env-378000' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0116 03:50:05.118667    5168 main.go:141] libmachine: [stdout =====>] : 
	Name                     State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                     ----- ----------- ----------------- ------   ------             -------
	force-systemd-env-378000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0116 03:50:05.118838    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:05.118902    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName force-systemd-env-378000 -DynamicMemoryEnabled $false
	I0116 03:50:07.559747    5168 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:50:07.559747    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:07.559747    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor force-systemd-env-378000 -Count 2
	I0116 03:50:09.914239    5168 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:50:09.914239    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:09.914515    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName force-systemd-env-378000 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\force-systemd-env-378000\boot2docker.iso'
	I0116 03:50:12.674657    5168 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:50:12.674657    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:12.674657    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName force-systemd-env-378000 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\force-systemd-env-378000\disk.vhd'
	I0116 03:50:15.489871    5168 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:50:15.490124    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:15.490124    5168 main.go:141] libmachine: Starting VM...
	I0116 03:50:15.490124    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM force-systemd-env-378000
	I0116 03:50:19.333464    5168 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:50:19.333616    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:19.333616    5168 main.go:141] libmachine: Waiting for host to start...
	I0116 03:50:19.333677    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:50:21.727848    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:50:21.728178    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:21.728243    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:50:24.374505    5168 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:50:24.374627    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:25.375469    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:50:28.273020    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:50:28.273326    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:28.273387    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:50:30.879382    5168 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:50:30.879382    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:31.880387    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:50:34.134829    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:50:34.134829    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:34.135904    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:50:36.988211    5168 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:50:36.988508    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:38.003229    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:50:40.374615    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:50:40.374699    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:40.374819    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:50:43.121093    5168 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:50:43.121093    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:44.136378    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:50:46.407431    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:50:46.407431    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:46.407431    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:50:49.258423    5168 main.go:141] libmachine: [stdout =====>] : 172.27.114.192
	
	I0116 03:50:49.258423    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:49.258423    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:50:51.470668    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:50:51.470926    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:51.470926    5168 machine.go:88] provisioning docker machine ...
	I0116 03:50:51.470926    5168 buildroot.go:166] provisioning hostname "force-systemd-env-378000"
	I0116 03:50:51.471042    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:50:53.671653    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:50:53.671653    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:53.671791    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:50:56.337325    5168 main.go:141] libmachine: [stdout =====>] : 172.27.114.192
	
	I0116 03:50:56.337325    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:56.342592    5168 main.go:141] libmachine: Using SSH client type: native
	I0116 03:50:56.342932    5168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.114.192 22 <nil> <nil>}
	I0116 03:50:56.342932    5168 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-378000 && echo "force-systemd-env-378000" | sudo tee /etc/hostname
	I0116 03:50:56.524414    5168 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-378000
	
	I0116 03:50:56.524543    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:50:58.782419    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:50:58.782705    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:50:58.782905    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:51:01.430893    5168 main.go:141] libmachine: [stdout =====>] : 172.27.114.192
	
	I0116 03:51:01.431075    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:01.437273    5168 main.go:141] libmachine: Using SSH client type: native
	I0116 03:51:01.438020    5168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.114.192 22 <nil> <nil>}
	I0116 03:51:01.438020    5168 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-378000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-378000/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-378000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:51:01.591431    5168 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:51:01.591431    5168 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0116 03:51:01.591431    5168 buildroot.go:174] setting up certificates
	I0116 03:51:01.591431    5168 provision.go:83] configureAuth start
	I0116 03:51:01.591431    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:51:03.788523    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:51:03.788523    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:03.788637    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:51:06.417038    5168 main.go:141] libmachine: [stdout =====>] : 172.27.114.192
	
	I0116 03:51:06.417038    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:06.417038    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:51:08.567685    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:51:08.567685    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:08.567685    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:51:11.290474    5168 main.go:141] libmachine: [stdout =====>] : 172.27.114.192
	
	I0116 03:51:11.290560    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:11.290560    5168 provision.go:138] copyHostCerts
	I0116 03:51:11.290833    5168 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0116 03:51:11.291184    5168 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0116 03:51:11.291184    5168 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0116 03:51:11.291673    5168 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0116 03:51:11.293813    5168 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0116 03:51:11.294062    5168 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0116 03:51:11.294164    5168 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0116 03:51:11.294461    5168 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0116 03:51:11.295674    5168 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0116 03:51:11.295885    5168 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0116 03:51:11.295885    5168 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0116 03:51:11.296283    5168 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1675 bytes)
	I0116 03:51:11.297190    5168 provision.go:112] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-env-378000 san=[172.27.114.192 172.27.114.192 localhost 127.0.0.1 minikube force-systemd-env-378000]
	I0116 03:51:11.549283    5168 provision.go:172] copyRemoteCerts
	I0116 03:51:11.565635    5168 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:51:11.565736    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:51:13.846007    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:51:13.846268    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:13.846268    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:51:16.432562    5168 main.go:141] libmachine: [stdout =====>] : 172.27.114.192
	
	I0116 03:51:16.432752    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:16.433041    5168 sshutil.go:53] new ssh client: &{IP:172.27.114.192 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\force-systemd-env-378000\id_rsa Username:docker}
	I0116 03:51:16.543919    5168 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.978034s)
	I0116 03:51:16.543919    5168 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0116 03:51:16.544444    5168 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:51:16.592458    5168 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0116 03:51:16.592902    5168 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1245 bytes)
	I0116 03:51:16.634051    5168 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0116 03:51:16.634337    5168 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:51:16.675743    5168 provision.go:86] duration metric: configureAuth took 15.0842135s
	I0116 03:51:16.675867    5168 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:51:16.676232    5168 config.go:182] Loaded profile config "force-systemd-env-378000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:51:16.676232    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:51:18.879178    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:51:18.879178    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:18.879178    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:51:21.487606    5168 main.go:141] libmachine: [stdout =====>] : 172.27.114.192
	
	I0116 03:51:21.487747    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:21.493821    5168 main.go:141] libmachine: Using SSH client type: native
	I0116 03:51:21.494793    5168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.114.192 22 <nil> <nil>}
	I0116 03:51:21.494793    5168 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0116 03:51:21.636826    5168 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0116 03:51:21.636826    5168 buildroot.go:70] root file system type: tmpfs
	I0116 03:51:21.636826    5168 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0116 03:51:21.636826    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:51:23.714733    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:51:23.714928    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:23.715247    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:51:26.293533    5168 main.go:141] libmachine: [stdout =====>] : 172.27.114.192
	
	I0116 03:51:26.293778    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:26.300425    5168 main.go:141] libmachine: Using SSH client type: native
	I0116 03:51:26.301315    5168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.114.192 22 <nil> <nil>}
	I0116 03:51:26.301315    5168 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0116 03:51:26.462478    5168 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0116 03:51:26.462595    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:51:28.637584    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:51:28.637813    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:28.637813    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:51:31.226056    5168 main.go:141] libmachine: [stdout =====>] : 172.27.114.192
	
	I0116 03:51:31.226056    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:31.232918    5168 main.go:141] libmachine: Using SSH client type: native
	I0116 03:51:31.233801    5168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.114.192 22 <nil> <nil>}
	I0116 03:51:31.233859    5168 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0116 03:51:34.805894    5168 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0116 03:51:34.805894    5168 machine.go:91] provisioned docker machine in 43.3346852s
	I0116 03:51:34.805894    5168 client.go:171] LocalClient.Create took 2m1.3754421s
	I0116 03:51:34.805894    5168 start.go:167] duration metric: libmachine.API.Create for "force-systemd-env-378000" took 2m1.3755493s
	I0116 03:51:34.805894    5168 start.go:300] post-start starting for "force-systemd-env-378000" (driver="hyperv")
	I0116 03:51:34.805894    5168 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:51:34.824285    5168 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:51:34.824285    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:51:37.061356    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:51:37.061551    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:37.061551    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:51:39.656358    5168 main.go:141] libmachine: [stdout =====>] : 172.27.114.192
	
	I0116 03:51:39.656617    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:39.656866    5168 sshutil.go:53] new ssh client: &{IP:172.27.114.192 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\force-systemd-env-378000\id_rsa Username:docker}
	I0116 03:51:39.769947    5168 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9456286s)
	I0116 03:51:39.785269    5168 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:51:39.792197    5168 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:51:39.792197    5168 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0116 03:51:39.792790    5168 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0116 03:51:39.794113    5168 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> 135082.pem in /etc/ssl/certs
	I0116 03:51:39.794113    5168 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> /etc/ssl/certs/135082.pem
	I0116 03:51:39.808498    5168 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:51:39.824989    5168 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem --> /etc/ssl/certs/135082.pem (1708 bytes)
	I0116 03:51:39.872804    5168 start.go:303] post-start completed in 5.0668765s
	I0116 03:51:39.875946    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:51:42.029027    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:51:42.029027    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:42.029307    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:51:44.655300    5168 main.go:141] libmachine: [stdout =====>] : 172.27.114.192
	
	I0116 03:51:44.655473    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:44.655737    5168 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\force-systemd-env-378000\config.json ...
	I0116 03:51:44.658605    5168 start.go:128] duration metric: createHost completed in 2m11.2301411s
	I0116 03:51:44.658728    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:51:46.837160    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:51:46.837355    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:46.837543    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:51:49.429436    5168 main.go:141] libmachine: [stdout =====>] : 172.27.114.192
	
	I0116 03:51:49.429436    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:49.436644    5168 main.go:141] libmachine: Using SSH client type: native
	I0116 03:51:49.436889    5168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.114.192 22 <nil> <nil>}
	I0116 03:51:49.437502    5168 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 03:51:49.579098    5168 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705377109.579787757
	
	I0116 03:51:49.579098    5168 fix.go:206] guest clock: 1705377109.579787757
	I0116 03:51:49.579098    5168 fix.go:219] Guest: 2024-01-16 03:51:49.579787757 +0000 UTC Remote: 2024-01-16 03:51:44.6587283 +0000 UTC m=+207.630092101 (delta=4.921059457s)
	I0116 03:51:49.579098    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:51:51.722365    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:51:51.722656    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:51.722656    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:51:54.218579    5168 main.go:141] libmachine: [stdout =====>] : 172.27.114.192
	
	I0116 03:51:54.218579    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:54.225151    5168 main.go:141] libmachine: Using SSH client type: native
	I0116 03:51:54.225415    5168 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.114.192 22 <nil> <nil>}
	I0116 03:51:54.225995    5168 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705377109
	I0116 03:51:54.375291    5168 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jan 16 03:51:49 UTC 2024
	
	I0116 03:51:54.375291    5168 fix.go:226] clock set: Tue Jan 16 03:51:49 UTC 2024
	 (err=<nil>)
	I0116 03:51:54.375291    5168 start.go:83] releasing machines lock for "force-systemd-env-378000", held for 2m20.9470624s
	I0116 03:51:54.375815    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:51:56.558469    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:51:56.558524    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:56.558524    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:51:59.166137    5168 main.go:141] libmachine: [stdout =====>] : 172.27.114.192
	
	I0116 03:51:59.166400    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:51:59.171040    5168 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:51:59.171040    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:51:59.183888    5168 ssh_runner.go:195] Run: cat /version.json
	I0116 03:51:59.183888    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-env-378000 ).state
	I0116 03:52:01.451206    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:52:01.451303    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:52:01.451303    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:52:01.474435    5168 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:52:01.474802    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:52:01.474802    5168 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-env-378000 ).networkadapters[0]).ipaddresses[0]
	I0116 03:52:04.172428    5168 main.go:141] libmachine: [stdout =====>] : 172.27.114.192
	
	I0116 03:52:04.172428    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:52:04.172586    5168 sshutil.go:53] new ssh client: &{IP:172.27.114.192 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\force-systemd-env-378000\id_rsa Username:docker}
	I0116 03:52:04.235408    5168 main.go:141] libmachine: [stdout =====>] : 172.27.114.192
	
	I0116 03:52:04.235597    5168 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:52:04.235880    5168 sshutil.go:53] new ssh client: &{IP:172.27.114.192 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\force-systemd-env-378000\id_rsa Username:docker}
	I0116 03:52:04.347224    5168 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.1761497s)
	I0116 03:52:04.347224    5168 ssh_runner.go:235] Completed: cat /version.json: (5.1633023s)
	I0116 03:52:04.361699    5168 ssh_runner.go:195] Run: systemctl --version
	I0116 03:52:04.386689    5168 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 03:52:04.395059    5168 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:52:04.409242    5168 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:52:04.433522    5168 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:52:04.433651    5168 start.go:475] detecting cgroup driver to use...
	I0116 03:52:04.433727    5168 start.go:479] using "systemd" cgroup driver as enforced via flags
	I0116 03:52:04.433876    5168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:52:04.485179    5168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0116 03:52:04.513727    5168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0116 03:52:04.530613    5168 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0116 03:52:04.545823    5168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0116 03:52:04.585791    5168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 03:52:04.623290    5168 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0116 03:52:04.656216    5168 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 03:52:04.687613    5168 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:52:04.722343    5168 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0116 03:52:04.754112    5168 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:52:04.783339    5168 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:52:04.814056    5168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:52:04.990571    5168 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0116 03:52:05.023895    5168 start.go:475] detecting cgroup driver to use...
	I0116 03:52:05.023895    5168 start.go:479] using "systemd" cgroup driver as enforced via flags
	I0116 03:52:05.039036    5168 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0116 03:52:05.074508    5168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:52:05.110071    5168 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:52:05.157488    5168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:52:05.193484    5168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 03:52:05.227962    5168 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0116 03:52:05.287606    5168 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 03:52:05.307755    5168 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:52:05.355659    5168 ssh_runner.go:195] Run: which cri-dockerd
	I0116 03:52:05.374639    5168 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0116 03:52:05.389303    5168 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0116 03:52:05.432035    5168 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0116 03:52:05.621099    5168 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0116 03:52:05.783741    5168 docker.go:574] configuring docker to use "systemd" as cgroup driver...
	I0116 03:52:05.783741    5168 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0116 03:52:05.832190    5168 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:52:06.012961    5168 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0116 03:53:07.131996    5168 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1186313s)
	I0116 03:53:07.146062    5168 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0116 03:53:07.173538    5168 out.go:177] 
	W0116 03:53:07.173538    5168 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2024-01-16 03:50:38 UTC, ends at Tue 2024-01-16 03:53:07 UTC. --
	Jan 16 03:51:31 force-systemd-env-378000 systemd[1]: Starting Docker Application Container Engine...
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:31.798464962Z" level=info msg="Starting up"
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:31.801170200Z" level=info msg="containerd not running, starting managed containerd"
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:31.803644335Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=680
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.839751642Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.867268329Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.867490332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.870073968Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.870160169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.870455174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.870557775Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.870665776Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.870819579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.870913180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.871070482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.871504488Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.871606090Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.871625090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.871775692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.871869493Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.871997695Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.872085796Z" level=info msg="metadata content store policy set" policy=shared
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063291031Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063465633Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063508133Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063551334Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063613635Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063630735Z" level=info msg="NRI interface is disabled by configuration."
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063664435Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063839338Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063951839Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064220843Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064244543Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064264743Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064295444Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064318944Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064335544Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064353045Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064369745Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064420045Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064434946Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064535347Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.067399785Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.067786190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068006893Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068162595Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068254196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068398698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068421898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068437598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068453499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068470399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068487299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068502599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068519399Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068586000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068605201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068619701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068640801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068658901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068692402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068706602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068720902Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068739002Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068753303Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068767103Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.069248909Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.069392311Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.069441012Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.069485512Z" level=info msg="containerd successfully booted in 0.231456s"
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:32.465205424Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:32.812504798Z" level=info msg="Loading containers: start."
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.024412616Z" level=info msg="Loading containers: done."
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.066880908Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.066996709Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.067010909Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.067018210Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.067051210Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.067226512Z" level=info msg="Daemon has completed initialization"
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.803199131Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.803265332Z" level=info msg="API listen on [::]:2376"
	Jan 16 03:51:34 force-systemd-env-378000 systemd[1]: Started Docker Application Container Engine.
	Jan 16 03:52:06 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:52:06.035177502Z" level=info msg="Processing signal 'terminated'"
	Jan 16 03:52:06 force-systemd-env-378000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 16 03:52:06 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:52:06.037325302Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 16 03:52:06 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:52:06.037790602Z" level=info msg="Daemon shutdown complete"
	Jan 16 03:52:06 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:52:06.037882602Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 16 03:52:06 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:52:06.038177502Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 16 03:52:07 force-systemd-env-378000 systemd[1]: docker.service: Succeeded.
	Jan 16 03:52:07 force-systemd-env-378000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 16 03:52:07 force-systemd-env-378000 systemd[1]: Starting Docker Application Container Engine...
	Jan 16 03:52:07 force-systemd-env-378000 dockerd[1011]: time="2024-01-16T03:52:07.119086402Z" level=info msg="Starting up"
	Jan 16 03:53:07 force-systemd-env-378000 dockerd[1011]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 16 03:53:07 force-systemd-env-378000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 16 03:53:07 force-systemd-env-378000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 16 03:53:07 force-systemd-env-378000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2024-01-16 03:50:38 UTC, ends at Tue 2024-01-16 03:53:07 UTC. --
	Jan 16 03:51:31 force-systemd-env-378000 systemd[1]: Starting Docker Application Container Engine...
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:31.798464962Z" level=info msg="Starting up"
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:31.801170200Z" level=info msg="containerd not running, starting managed containerd"
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:31.803644335Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=680
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.839751642Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.867268329Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.867490332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.870073968Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.870160169Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.870455174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.870557775Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.870665776Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.870819579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.870913180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.871070482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.871504488Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.871606090Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.871625090Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.871775692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.871869493Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.871997695Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 16 03:51:31 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:31.872085796Z" level=info msg="metadata content store policy set" policy=shared
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063291031Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063465633Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063508133Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063551334Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063613635Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063630735Z" level=info msg="NRI interface is disabled by configuration."
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063664435Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063839338Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.063951839Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064220843Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064244543Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064264743Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064295444Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064318944Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064335544Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064353045Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064369745Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064420045Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064434946Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.064535347Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.067399785Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.067786190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068006893Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068162595Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068254196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068398698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068421898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068437598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068453499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068470399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068487299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068502599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068519399Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068586000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068605201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068619701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068640801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068658901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068692402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068706602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068720902Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068739002Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068753303Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.068767103Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.069248909Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.069392311Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.069441012Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[680]: time="2024-01-16T03:51:32.069485512Z" level=info msg="containerd successfully booted in 0.231456s"
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:32.465205424Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 16 03:51:32 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:32.812504798Z" level=info msg="Loading containers: start."
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.024412616Z" level=info msg="Loading containers: done."
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.066880908Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.066996709Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.067010909Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.067018210Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.067051210Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.067226512Z" level=info msg="Daemon has completed initialization"
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.803199131Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 16 03:51:34 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:51:34.803265332Z" level=info msg="API listen on [::]:2376"
	Jan 16 03:51:34 force-systemd-env-378000 systemd[1]: Started Docker Application Container Engine.
	Jan 16 03:52:06 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:52:06.035177502Z" level=info msg="Processing signal 'terminated'"
	Jan 16 03:52:06 force-systemd-env-378000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 16 03:52:06 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:52:06.037325302Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 16 03:52:06 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:52:06.037790602Z" level=info msg="Daemon shutdown complete"
	Jan 16 03:52:06 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:52:06.037882602Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 16 03:52:06 force-systemd-env-378000 dockerd[674]: time="2024-01-16T03:52:06.038177502Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 16 03:52:07 force-systemd-env-378000 systemd[1]: docker.service: Succeeded.
	Jan 16 03:52:07 force-systemd-env-378000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 16 03:52:07 force-systemd-env-378000 systemd[1]: Starting Docker Application Container Engine...
	Jan 16 03:52:07 force-systemd-env-378000 dockerd[1011]: time="2024-01-16T03:52:07.119086402Z" level=info msg="Starting up"
	Jan 16 03:53:07 force-systemd-env-378000 dockerd[1011]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 16 03:53:07 force-systemd-env-378000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 16 03:53:07 force-systemd-env-378000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 16 03:53:07 force-systemd-env-378000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0116 03:53:07.174514    5168 out.go:239] * 
	W0116 03:53:07.176515    5168 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 03:53:07.176515    5168 out.go:177] 

                                                
                                                
** /stderr **
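The journal above shows the actual failure mode: the first dockerd (pid 674) launched its own managed containerd on `/var/run/docker/containerd/containerd.sock`, but after the `systemctl restart docker`, the new dockerd (pid 1011) spent 60 seconds trying to dial `/run/containerd/containerd.sock` and gave up with "context deadline exceeded". That dial-with-deadline check can be illustrated outside the VM by probing a unix socket with a timeout. The sketch below is a hypothetical diagnostic helper (`probe_unix_socket` is not part of minikube or Docker), shown only to demonstrate the difference between a socket that is being listened on and one that is absent:

```python
import os
import socket
import tempfile

def probe_unix_socket(path: str, timeout: float = 1.0) -> bool:
    """Return True iff something is accepting connections on the unix socket."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(path)
        return True
    except OSError:  # ENOENT (no socket file), ECONNREFUSED, or timeout
        return False
    finally:
        s.close()

# Demo: a listener we bind ourselves answers; an absent path does not.
d = tempfile.mkdtemp()
live = os.path.join(d, "containerd.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(live)
srv.listen(1)
ok = probe_unix_socket(live)                              # listener present
missing = probe_unix_socket(os.path.join(d, "gone.sock")) # nothing there
srv.close()
print(ok, missing)
```

In the failing run, the restarted daemon was in the `missing` case for the full deadline, which is why systemd reports `status=1/FAILURE` rather than a crash.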
docker_test.go:157: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-env-378000 --memory=2048 --alsologtostderr -v=5 --driver=hyperv" : exit status 90
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-378000 ssh "docker info --format {{.CgroupDriver}}"
E0116 03:53:13.054596   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 03:53:46.627216   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-378000 ssh "docker info --format {{.CgroupDriver}}": (1m0.1131125s)
docker_test.go:115: expected systemd cgroup driver, got: 
-- stdout --
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 03:53:07.574682   12692 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
panic.go:523: *** TestForceSystemdEnv FAILED at 2024-01-16 03:54:07.5583367 +0000 UTC m=+8261.489613601
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-378000 -n force-systemd-env-378000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-env-378000 -n force-systemd-env-378000: exit status 6 (13.1722287s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 03:54:07.695005    5644 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0116 03:54:20.661678    5644 status.go:415] kubeconfig endpoint: extract IP: "force-systemd-env-378000" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "force-systemd-env-378000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "force-systemd-env-378000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-378000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-378000: (1m1.4728804s)
--- FAIL: TestForceSystemdEnv (425.20s)

                                                
                                    
TestErrorSpam/setup (186.76s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-827200 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 --driver=hyperv
E0116 01:48:46.573024   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 01:48:46.588713   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 01:48:46.604271   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 01:48:46.635970   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 01:48:46.683328   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 01:48:46.777295   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 01:48:46.950013   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 01:48:47.281557   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 01:48:47.934688   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 01:48:49.221110   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 01:48:51.784265   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 01:48:56.909720   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 01:49:07.161452   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 01:49:27.650378   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 01:50:08.615026   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-827200 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 --driver=hyperv: (3m6.7614629s)
error_spam_test.go:96: unexpected stderr: "W0116 01:48:02.711485    3836 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-827200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
- KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
- MINIKUBE_LOCATION=17967
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting control plane node nospam-827200 in cluster nospam-827200
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-827200" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0116 01:48:02.711485    3836 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (186.76s)
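Every failing command in this report carries the same stderr warning: the Docker CLI cannot open `...\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json`. The long hex directory name is the SHA-256 digest of the context name, which is how the Docker CLI's context store maps a context name to its metadata directory; here the digest corresponds to the name `default`, so the CLI was told to use a context called "default" whose metadata was never written. A minimal check of that name-to-directory mapping:

```python
import hashlib

# The directory under ~/.docker/contexts/meta/ is sha256(context name).
name = "default"
digest = hashlib.sha256(name.encode("utf-8")).hexdigest()
print(digest)
```

The printed digest matches the path in the warnings above. The usual remedies (offered here as suggestions, not something this report verifies) are unsetting `DOCKER_CONTEXT`/the `currentContext` field in `~/.docker/config.json`, or recreating the missing context so the metadata file exists.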

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-833600 config unset cpus" to be -""- but got *"W0116 02:03:12.682408    6300 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-833600 config get cpus: exit status 14 (298.369ms)

                                                
                                                
** stderr ** 
	W0116 02:03:13.050331   12908 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-833600 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0116 02:03:13.050331   12908 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-833600 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0116 02:03:13.332944    5296 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-833600 config get cpus" to be -""- but got *"W0116 02:03:13.631800    9864 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-833600 config unset cpus" to be -""- but got *"W0116 02:03:13.918651    5940 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-833600 config get cpus: exit status 14 (266.8945ms)

** stderr ** 
	W0116 02:03:14.220116    1436 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-833600 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0116 02:03:14.220116    1436 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.82s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-833600 service --namespace=default --https --url hello-node: exit status 1 (15.0283316s)

** stderr ** 
	W0116 02:03:59.577648   11216 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1510: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-833600 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

TestFunctional/parallel/ServiceCmd/Format (15.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-833600 service hello-node --url --format={{.IP}}: exit status 1 (15.0409871s)

** stderr ** 
	W0116 02:04:14.594925    6072 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1541: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-833600 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1547: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.04s)

TestFunctional/parallel/ServiceCmd/URL (15.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-833600 service hello-node --url: exit status 1 (15.0498147s)

** stderr ** 
	W0116 02:04:29.653318     960 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
functional_test.go:1560: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-833600 service hello-node --url": exit status 1
functional_test.go:1564: found endpoint for hello-node: 
functional_test.go:1572: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.05s)

TestIngressAddonLegacy/StartLegacyK8sCluster (209.93s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-499000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv
E0116 02:15:56.978143   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:18:13.015695   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:18:40.826176   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:18:46.589764   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-499000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv: exit status 90 (3m29.7272171s)

-- stdout --
	* [ingress-addon-legacy-499000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node ingress-addon-legacy-499000 in cluster ingress-addon-legacy-499000
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating hyperv VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	W0116 02:15:49.967775    8732 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0116 02:15:50.040319    8732 out.go:296] Setting OutFile to fd 616 ...
	I0116 02:15:50.041040    8732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:15:50.041040    8732 out.go:309] Setting ErrFile to fd 708...
	I0116 02:15:50.041040    8732 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:15:50.065883    8732 out.go:303] Setting JSON to false
	I0116 02:15:50.068256    8732 start.go:128] hostinfo: {"hostname":"minikube3","uptime":49740,"bootTime":1705321609,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0116 02:15:50.068256    8732 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0116 02:15:50.069457    8732 out.go:177] * [ingress-addon-legacy-499000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0116 02:15:50.070427    8732 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 02:15:50.071145    8732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:15:50.071772    8732 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0116 02:15:50.070976    8732 notify.go:220] Checking for updates...
	I0116 02:15:50.072369    8732 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 02:15:50.073353    8732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:15:50.074667    8732 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:15:55.366797    8732 out.go:177] * Using the hyperv driver based on user configuration
	I0116 02:15:55.367577    8732 start.go:298] selected driver: hyperv
	I0116 02:15:55.368188    8732 start.go:902] validating driver "hyperv" against <nil>
	I0116 02:15:55.368188    8732 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:15:55.418403    8732 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:15:55.419808    8732 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 02:15:55.419980    8732 cni.go:84] Creating CNI manager for ""
	I0116 02:15:55.420040    8732 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0116 02:15:55.420075    8732 start_flags.go:321] config:
	{Name:ingress-addon-legacy-499000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-499000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:15:55.420122    8732 iso.go:125] acquiring lock: {Name:mk2c0b62d272a573835231fdc54419c800e07e34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:15:55.421958    8732 out.go:177] * Starting control plane node ingress-addon-legacy-499000 in cluster ingress-addon-legacy-499000
	I0116 02:15:55.422069    8732 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0116 02:15:55.463031    8732 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0116 02:15:55.463031    8732 cache.go:56] Caching tarball of preloaded images
	I0116 02:15:55.464257    8732 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0116 02:15:55.465319    8732 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0116 02:15:55.465569    8732 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0116 02:15:55.534134    8732 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0116 02:15:59.116897    8732 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0116 02:15:59.117483    8732 preload.go:256] verifying checksum of C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0116 02:16:00.274526    8732 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0116 02:16:00.275752    8732 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-499000\config.json ...
	I0116 02:16:00.275752    8732 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-499000\config.json: {Name:mk5735b850d154af611edac552ba81bc34422dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:16:00.277581    8732 start.go:365] acquiring machines lock for ingress-addon-legacy-499000: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 02:16:00.277581    8732 start.go:369] acquired machines lock for "ingress-addon-legacy-499000" in 0s
	I0116 02:16:00.277581    8732 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-499000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-499000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0116 02:16:00.277581    8732 start.go:125] createHost starting for "" (driver="hyperv")
	I0116 02:16:00.278613    8732 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0116 02:16:00.278613    8732 start.go:159] libmachine.API.Create for "ingress-addon-legacy-499000" (driver="hyperv")
	I0116 02:16:00.278613    8732 client.go:168] LocalClient.Create starting
	I0116 02:16:00.279589    8732 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0116 02:16:00.279589    8732 main.go:141] libmachine: Decoding PEM data...
	I0116 02:16:00.279589    8732 main.go:141] libmachine: Parsing certificate...
	I0116 02:16:00.279589    8732 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0116 02:16:00.279589    8732 main.go:141] libmachine: Decoding PEM data...
	I0116 02:16:00.279589    8732 main.go:141] libmachine: Parsing certificate...
	I0116 02:16:00.280586    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0116 02:16:02.417713    8732 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0116 02:16:02.417713    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:02.417713    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0116 02:16:04.203868    8732 main.go:141] libmachine: [stdout =====>] : False
	
	I0116 02:16:04.204114    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:04.204114    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0116 02:16:05.700669    8732 main.go:141] libmachine: [stdout =====>] : True
	
	I0116 02:16:05.700669    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:05.700782    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0116 02:16:09.222285    8732 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0116 02:16:09.222285    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:09.224806    8732 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 02:16:09.636527    8732 main.go:141] libmachine: Creating SSH key...
	I0116 02:16:10.082632    8732 main.go:141] libmachine: Creating VM...
	I0116 02:16:10.082632    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0116 02:16:12.889047    8732 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0116 02:16:12.889187    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:12.889187    8732 main.go:141] libmachine: Using switch "Default Switch"
	I0116 02:16:12.889187    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0116 02:16:14.679157    8732 main.go:141] libmachine: [stdout =====>] : True
	
	I0116 02:16:14.679157    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:14.679241    8732 main.go:141] libmachine: Creating VHD
	I0116 02:16:14.679304    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ingress-addon-legacy-499000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0116 02:16:18.437573    8732 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ingress-addon-legacy-49900
	                          0\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4A1FE022-47F5-443E-8D4B-BB6B891C7E8A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0116 02:16:18.437573    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:18.437695    8732 main.go:141] libmachine: Writing magic tar header
	I0116 02:16:18.437809    8732 main.go:141] libmachine: Writing SSH key tar header
	I0116 02:16:18.447010    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ingress-addon-legacy-499000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ingress-addon-legacy-499000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0116 02:16:21.597196    8732 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:16:21.597455    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:21.597534    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ingress-addon-legacy-499000\disk.vhd' -SizeBytes 20000MB
	I0116 02:16:24.076946    8732 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:16:24.076946    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:24.076946    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ingress-addon-legacy-499000 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ingress-addon-legacy-499000' -SwitchName 'Default Switch' -MemoryStartupBytes 4096MB
	I0116 02:16:27.640191    8732 main.go:141] libmachine: [stdout =====>] : 
	Name                        State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                        ----- ----------- ----------------- ------   ------             -------
	ingress-addon-legacy-499000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0116 02:16:27.640273    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:27.640273    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ingress-addon-legacy-499000 -DynamicMemoryEnabled $false
	I0116 02:16:29.778638    8732 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:16:29.778730    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:29.778730    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ingress-addon-legacy-499000 -Count 2
	I0116 02:16:31.887574    8732 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:16:31.887615    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:31.887615    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ingress-addon-legacy-499000 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ingress-addon-legacy-499000\boot2docker.iso'
	I0116 02:16:34.432501    8732 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:16:34.432689    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:34.432689    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ingress-addon-legacy-499000 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ingress-addon-legacy-499000\disk.vhd'
	I0116 02:16:37.038461    8732 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:16:37.038461    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:37.038461    8732 main.go:141] libmachine: Starting VM...
	I0116 02:16:37.038461    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ingress-addon-legacy-499000
	I0116 02:16:39.841727    8732 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:16:39.841803    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:39.841803    8732 main.go:141] libmachine: Waiting for host to start...
	I0116 02:16:39.841873    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:16:42.024036    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:16:42.024337    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:42.024337    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:16:44.431895    8732 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:16:44.431953    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:45.434553    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:16:47.601022    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:16:47.601022    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:47.601253    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:16:50.070227    8732 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:16:50.070291    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:51.074965    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:16:53.222335    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:16:53.222424    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:53.222424    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:16:55.694427    8732 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:16:55.694427    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:56.698945    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:16:58.852329    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:16:58.852384    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:16:58.852452    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:17:01.313841    8732 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:17:01.313841    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:02.315609    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:17:04.486372    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:17:04.486711    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:04.486746    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:17:07.018570    8732 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:17:07.018570    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:07.018570    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:17:09.078092    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:17:09.078337    8732 main.go:141] libmachine: [stderr =====>] : 
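The driver above polls `( Get-VM ... ).state` and the adapter's `ipaddresses[0]` in a loop, retrying until DHCP hands the guest an address (empty stdout means "no IP yet"). A minimal sketch of that poll-until-ready shape in plain shell; the inline fake standing in for the Hyper-V query is hypothetical and simply returns empty twice before an address appears:

```shell
#!/bin/sh
# Poll-until-ready loop, as the libmachine Hyper-V driver does above.
# The "fake" branch stands in for:
#   powershell (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]
attempt=0
ip=""
while [ -z "$ip" ] && [ "$attempt" -lt 10 ]; do
  attempt=$((attempt + 1))
  # fake: the first two queries come back empty, the third returns an IP
  if [ "$attempt" -ge 3 ]; then
    ip="172.27.124.201"
  fi
  # the real loop sleeps ~1s between attempts; omitted here
done
echo "got $ip after $attempt attempts"
```

The real loop additionally re-checks the VM state before each IP query, bailing out if the VM is no longer `Running`.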
	I0116 02:17:09.078337    8732 machine.go:88] provisioning docker machine ...
	I0116 02:17:09.078426    8732 buildroot.go:166] provisioning hostname "ingress-addon-legacy-499000"
	I0116 02:17:09.078560    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:17:11.193334    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:17:11.193334    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:11.193334    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:17:13.691984    8732 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:17:13.692161    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:13.697562    8732 main.go:141] libmachine: Using SSH client type: native
	I0116 02:17:13.708071    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.124.201 22 <nil> <nil>}
	I0116 02:17:13.708071    8732 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-499000 && echo "ingress-addon-legacy-499000" | sudo tee /etc/hostname
	I0116 02:17:13.874544    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-499000
	
	I0116 02:17:13.874544    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:17:15.970360    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:17:15.970360    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:15.970360    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:17:18.437191    8732 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:17:18.437191    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:18.443522    8732 main.go:141] libmachine: Using SSH client type: native
	I0116 02:17:18.444309    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.124.201 22 <nil> <nil>}
	I0116 02:17:18.444309    8732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-499000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-499000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-499000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:17:18.611512    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
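The SSH command above pins the new hostname in `/etc/hosts`: if a `127.0.1.1` entry already exists it is rewritten in place, otherwise one is appended. The same logic, run against a scratch file instead of the real `/etc/hosts` (the path and sample contents are made up for the demo):

```shell
#!/bin/sh
# Update-or-append hostname pinning, mirroring the /etc/hosts command above,
# applied to a scratch file instead of the real /etc/hosts.
hosts="/tmp/hosts.demo"
name="ingress-addon-legacy-499000"
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
if ! grep -q "[[:space:]]$name\$" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # an entry exists: rewrite it in place
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    # no entry yet: append one
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"
```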
	I0116 02:17:18.611512    8732 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0116 02:17:18.611512    8732 buildroot.go:174] setting up certificates
	I0116 02:17:18.611512    8732 provision.go:83] configureAuth start
	I0116 02:17:18.612184    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:17:20.665446    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:17:20.665446    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:20.665576    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:17:23.136170    8732 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:17:23.136170    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:23.136170    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:17:25.257752    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:17:25.257752    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:25.257752    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:17:27.763637    8732 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:17:27.763637    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:27.763748    8732 provision.go:138] copyHostCerts
	I0116 02:17:27.764211    8732 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0116 02:17:27.764334    8732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0116 02:17:27.764334    8732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0116 02:17:27.764932    8732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0116 02:17:27.766344    8732 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0116 02:17:27.766612    8732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0116 02:17:27.766697    8732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0116 02:17:27.767022    8732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0116 02:17:27.768551    8732 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0116 02:17:27.768886    8732 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0116 02:17:27.768886    8732 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0116 02:17:27.768886    8732 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1675 bytes)
	I0116 02:17:27.769959    8732 provision.go:112] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ingress-addon-legacy-499000 san=[172.27.124.201 172.27.124.201 localhost 127.0.0.1 minikube ingress-addon-legacy-499000]
	I0116 02:17:28.109202    8732 provision.go:172] copyRemoteCerts
	I0116 02:17:28.123100    8732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:17:28.123100    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:17:30.186197    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:17:30.186517    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:30.186517    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:17:32.639648    8732 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:17:32.639648    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:32.639648    8732 sshutil.go:53] new ssh client: &{IP:172.27.124.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ingress-addon-legacy-499000\id_rsa Username:docker}
	I0116 02:17:32.750483    8732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6273529s)
	I0116 02:17:32.750667    8732 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0116 02:17:32.751204    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 02:17:32.789646    8732 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0116 02:17:32.789738    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1253 bytes)
	I0116 02:17:32.829467    8732 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0116 02:17:32.829467    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 02:17:32.864519    8732 provision.go:86] duration metric: configureAuth took 14.2529128s
	I0116 02:17:32.864519    8732 buildroot.go:189] setting minikube options for container-runtime
	I0116 02:17:32.865387    8732 config.go:182] Loaded profile config "ingress-addon-legacy-499000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0116 02:17:32.865471    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:17:34.979060    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:17:34.979310    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:34.979528    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:17:37.557865    8732 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:17:37.558050    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:37.565466    8732 main.go:141] libmachine: Using SSH client type: native
	I0116 02:17:37.566572    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.124.201 22 <nil> <nil>}
	I0116 02:17:37.566572    8732 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0116 02:17:37.707065    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0116 02:17:37.707182    8732 buildroot.go:70] root file system type: tmpfs
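The `df --output=fstype / | tail -n 1` probe above is how the provisioner learns the root filesystem type (here `tmpfs`, i.e. a RAM-backed buildroot image). The same one-liner, captured into a variable; note `--output=fstype` is GNU coreutils `df`, so this assumes a GNU userland:

```shell
#!/bin/sh
# Probe the root filesystem type, as the provisioner does above.
# --output=fstype prints only the fs-type column; tail drops the header line.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: $fstype"
```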
	I0116 02:17:37.707370    8732 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0116 02:17:37.707370    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:17:39.845727    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:17:39.845727    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:39.845798    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:17:42.338781    8732 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:17:42.339002    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:42.344765    8732 main.go:141] libmachine: Using SSH client type: native
	I0116 02:17:42.345526    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.124.201 22 <nil> <nil>}
	I0116 02:17:42.345526    8732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0116 02:17:42.505498    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0116 02:17:42.505498    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:17:44.625903    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:17:44.626295    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:44.626295    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:17:47.144704    8732 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:17:47.144704    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:47.151206    8732 main.go:141] libmachine: Using SSH client type: native
	I0116 02:17:47.152503    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.124.201 22 <nil> <nil>}
	I0116 02:17:47.152503    8732 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0116 02:17:48.134740    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0116 02:17:48.134801    8732 machine.go:91] provisioned docker machine in 39.0562068s
	I0116 02:17:48.134859    8732 client.go:171] LocalClient.Create took 1m47.8555345s
	I0116 02:17:48.134917    8732 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-499000" took 1m47.8555919s
	I0116 02:17:48.134917    8732 start.go:300] post-start starting for "ingress-addon-legacy-499000" (driver="hyperv")
	I0116 02:17:48.135002    8732 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:17:48.149374    8732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:17:48.149374    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:17:50.217044    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:17:50.217131    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:50.217131    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:17:52.690798    8732 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:17:52.690977    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:52.691145    8732 sshutil.go:53] new ssh client: &{IP:172.27.124.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ingress-addon-legacy-499000\id_rsa Username:docker}
	I0116 02:17:52.800390    8732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6508889s)
	I0116 02:17:52.815217    8732 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:17:52.822854    8732 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 02:17:52.823043    8732 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0116 02:17:52.823582    8732 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0116 02:17:52.824962    8732 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> 135082.pem in /etc/ssl/certs
	I0116 02:17:52.824962    8732 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> /etc/ssl/certs/135082.pem
	I0116 02:17:52.838271    8732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 02:17:52.854422    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem --> /etc/ssl/certs/135082.pem (1708 bytes)
	I0116 02:17:52.890835    8732 start.go:303] post-start completed in 4.7558017s
	I0116 02:17:52.893902    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:17:55.019225    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:17:55.019225    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:55.019225    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:17:57.581235    8732 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:17:57.581494    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:57.581754    8732 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ingress-addon-legacy-499000\config.json ...
	I0116 02:17:57.585005    8732 start.go:128] duration metric: createHost completed in 1m57.3066496s
	I0116 02:17:57.585079    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:17:59.669553    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:17:59.669553    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:17:59.669553    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:18:02.250703    8732 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:18:02.250750    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:18:02.255262    8732 main.go:141] libmachine: Using SSH client type: native
	I0116 02:18:02.255936    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.124.201 22 <nil> <nil>}
	I0116 02:18:02.255936    8732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 02:18:02.398224    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705371482.398262259
	
	I0116 02:18:02.398772    8732 fix.go:206] guest clock: 1705371482.398262259
	I0116 02:18:02.398772    8732 fix.go:219] Guest: 2024-01-16 02:18:02.398262259 +0000 UTC Remote: 2024-01-16 02:17:57.5850796 +0000 UTC m=+127.728926101 (delta=4.813182659s)
	I0116 02:18:02.398928    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:18:04.497688    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:18:04.497899    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:18:04.497997    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:18:06.996000    8732 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:18:06.996000    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:18:07.002641    8732 main.go:141] libmachine: Using SSH client type: native
	I0116 02:18:07.003488    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.124.201 22 <nil> <nil>}
	I0116 02:18:07.003488    8732 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705371482
	I0116 02:18:07.151564    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jan 16 02:18:02 UTC 2024
	
	I0116 02:18:07.151564    8732 fix.go:226] clock set: Tue Jan 16 02:18:02 UTC 2024
	 (err=<nil>)
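The clock-fix step above reads the guest clock with `date +%s.%N`, compares it to the host-side timestamp, and resynchronizes the guest with `sudo date -s @<epoch>` (the log reports a delta of about 4.8s). The comparison boils down to an absolute-difference check; the 2-second threshold below is an assumption for the demo, not necessarily minikube's actual cutoff:

```shell
#!/bin/sh
# Guest-vs-host clock comparison, as in the log above. The epochs are the
# rounded values the log reports; the 2s threshold is an assumed cutoff.
guest=1705371482   # date +%s as reported by the guest
host=1705371477    # host-side wall clock, rounded to seconds
delta=$((guest - host))
abs=${delta#-}     # strip a leading minus sign to get |delta|
if [ "$abs" -gt 2 ]; then
  echo "skew ${delta}s: would run 'sudo date -s @$guest' on the guest"
else
  echo "clock ok"
fi
```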
	I0116 02:18:07.151631    8732 start.go:83] releasing machines lock for "ingress-addon-legacy-499000", held for 2m6.8731449s
	I0116 02:18:07.151859    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:18:09.239244    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:18:09.239244    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:18:09.239244    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:18:11.741415    8732 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:18:11.741415    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:18:11.746853    8732 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:18:11.747073    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:18:11.759799    8732 ssh_runner.go:195] Run: cat /version.json
	I0116 02:18:11.759799    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:18:13.937569    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:18:13.937569    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:18:13.937569    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:18:13.937569    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:18:13.937782    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:18:13.937782    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:18:16.532500    8732 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:18:16.532500    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:18:16.532717    8732 sshutil.go:53] new ssh client: &{IP:172.27.124.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ingress-addon-legacy-499000\id_rsa Username:docker}
	I0116 02:18:16.553259    8732 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:18:16.553491    8732 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:18:16.553617    8732 sshutil.go:53] new ssh client: &{IP:172.27.124.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ingress-addon-legacy-499000\id_rsa Username:docker}
	I0116 02:18:16.742380    8732 ssh_runner.go:235] Completed: cat /version.json: (4.9824541s)
	I0116 02:18:16.742380    8732 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9953916s)
	I0116 02:18:16.757195    8732 ssh_runner.go:195] Run: systemctl --version
	I0116 02:18:16.777825    8732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0116 02:18:16.785151    8732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 02:18:16.797565    8732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0116 02:18:16.827864    8732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0116 02:18:16.852393    8732 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 02:18:16.852393    8732 start.go:475] detecting cgroup driver to use...
	I0116 02:18:16.852760    8732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:18:16.896815    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0116 02:18:16.927770    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0116 02:18:16.942526    8732 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0116 02:18:16.956237    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0116 02:18:16.984399    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 02:18:17.014446    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0116 02:18:17.043240    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 02:18:17.074441    8732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:18:17.103112    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0116 02:18:17.132027    8732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:18:17.161501    8732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:18:17.188243    8732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:18:17.352037    8732 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0116 02:18:17.375668    8732 start.go:475] detecting cgroup driver to use...
	I0116 02:18:17.390678    8732 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0116 02:18:17.422868    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:18:17.459705    8732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:18:17.506130    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:18:17.539022    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 02:18:17.576094    8732 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0116 02:18:17.630374    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 02:18:17.648582    8732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:18:17.692106    8732 ssh_runner.go:195] Run: which cri-dockerd
	I0116 02:18:17.710191    8732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0116 02:18:17.724322    8732 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0116 02:18:17.768333    8732 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0116 02:18:17.943764    8732 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0116 02:18:18.111639    8732 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0116 02:18:18.111955    8732 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0116 02:18:18.151849    8732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:18:18.307259    8732 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0116 02:19:19.422662    8732 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1148645s)
	I0116 02:19:19.438201    8732 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0116 02:19:19.466385    8732 out.go:177] 
	W0116 02:19:19.467601    8732 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2024-01-16 02:16:57 UTC, ends at Tue 2024-01-16 02:19:19 UTC. --
	Jan 16 02:17:47 ingress-addon-legacy-499000 systemd[1]: Starting Docker Application Container Engine...
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:47.705231914Z" level=info msg="Starting up"
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:47.706075121Z" level=info msg="containerd not running, starting managed containerd"
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:47.707167831Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=689
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.746360062Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.772148180Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.772245781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.774460600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.774580701Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.774912204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.775042605Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.775324007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.775398508Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.775416108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.775655810Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.776027113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.776155914Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.776175014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.776324916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.776415616Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.776484317Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.776627118Z" level=info msg="metadata content store policy set" policy=shared
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.787405109Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.787598511Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.787619511Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.787668911Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.787721112Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.787732012Z" level=info msg="NRI interface is disabled by configuration."
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.787749312Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.787871213Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.787908014Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.787928514Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.787942014Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.787957514Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.787973614Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.787987714Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788000714Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788013814Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788028015Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788039915Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788050915Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788218716Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788669420Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788721320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788739921Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788762221Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788815421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788848321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788864222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788876022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788889322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788901922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788914522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788927322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788941522Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788998323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789015123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789027023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789039823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789053623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789067323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789080523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789184724Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789320125Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789337526Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789349726Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789672528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789930131Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789987531Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.790006731Z" level=info msg="containerd successfully booted in 0.047122s"
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:47.823371813Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:47.840035754Z" level=info msg="Loading containers: start."
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.059529480Z" level=info msg="Loading containers: done."
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.079301037Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.079384137Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.079430738Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.079468438Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.079524038Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.079731140Z" level=info msg="Daemon has completed initialization"
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.132880562Z" level=info msg="API listen on [::]:2376"
	Jan 16 02:17:48 ingress-addon-legacy-499000 systemd[1]: Started Docker Application Container Engine.
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.133260065Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 16 02:18:18 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:18:18.328634509Z" level=info msg="Processing signal 'terminated'"
	Jan 16 02:18:18 ingress-addon-legacy-499000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 16 02:18:18 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:18:18.330062609Z" level=info msg="Daemon shutdown complete"
	Jan 16 02:18:18 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:18:18.330180409Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 16 02:18:18 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:18:18.330305309Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 16 02:18:18 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:18:18.330404209Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 16 02:18:19 ingress-addon-legacy-499000 systemd[1]: docker.service: Succeeded.
	Jan 16 02:18:19 ingress-addon-legacy-499000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 16 02:18:19 ingress-addon-legacy-499000 systemd[1]: Starting Docker Application Container Engine...
	Jan 16 02:18:19 ingress-addon-legacy-499000 dockerd[1061]: time="2024-01-16T02:18:19.405170109Z" level=info msg="Starting up"
	Jan 16 02:19:19 ingress-addon-legacy-499000 dockerd[1061]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 16 02:19:19 ingress-addon-legacy-499000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 16 02:19:19 ingress-addon-legacy-499000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 16 02:19:19 ingress-addon-legacy-499000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788669420Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788721320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788739921Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788762221Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788815421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788848321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788864222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788876022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788889322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788901922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788914522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788927322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788941522Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.788998323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789015123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789027023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789039823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789053623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789067323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789080523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789184724Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789320125Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789337526Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789349726Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789672528Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789930131Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.789987531Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[689]: time="2024-01-16T02:17:47.790006731Z" level=info msg="containerd successfully booted in 0.047122s"
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:47.823371813Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 16 02:17:47 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:47.840035754Z" level=info msg="Loading containers: start."
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.059529480Z" level=info msg="Loading containers: done."
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.079301037Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.079384137Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.079430738Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.079468438Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.079524038Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.079731140Z" level=info msg="Daemon has completed initialization"
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.132880562Z" level=info msg="API listen on [::]:2376"
	Jan 16 02:17:48 ingress-addon-legacy-499000 systemd[1]: Started Docker Application Container Engine.
	Jan 16 02:17:48 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:17:48.133260065Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 16 02:18:18 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:18:18.328634509Z" level=info msg="Processing signal 'terminated'"
	Jan 16 02:18:18 ingress-addon-legacy-499000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 16 02:18:18 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:18:18.330062609Z" level=info msg="Daemon shutdown complete"
	Jan 16 02:18:18 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:18:18.330180409Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 16 02:18:18 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:18:18.330305309Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 16 02:18:18 ingress-addon-legacy-499000 dockerd[683]: time="2024-01-16T02:18:18.330404209Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jan 16 02:18:19 ingress-addon-legacy-499000 systemd[1]: docker.service: Succeeded.
	Jan 16 02:18:19 ingress-addon-legacy-499000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 16 02:18:19 ingress-addon-legacy-499000 systemd[1]: Starting Docker Application Container Engine...
	Jan 16 02:18:19 ingress-addon-legacy-499000 dockerd[1061]: time="2024-01-16T02:18:19.405170109Z" level=info msg="Starting up"
	Jan 16 02:19:19 ingress-addon-legacy-499000 dockerd[1061]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 16 02:19:19 ingress-addon-legacy-499000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 16 02:19:19 ingress-addon-legacy-499000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 16 02:19:19 ingress-addon-legacy-499000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0116 02:19:19.467774    8732 out.go:239] * 
	W0116 02:19:19.468214    8732 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 02:19:19.470073    8732 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p ingress-addon-legacy-499000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv" : exit status 90
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (209.93s)
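The root cause in the block above is dockerd timing out while dialing `/run/containerd/containerd.sock` after the engine restart. When triaging a report like this from a saved `minikube logs --file=logs.txt` capture (the attachment minikube's own error box asks for), the daemon-startup failure signature can be isolated with a small filter such as this sketch:

```shell
# Print daemon-startup failure lines from a saved minikube log capture
# (the file produced by `minikube logs --file=logs.txt`).
daemon_failures() {
  grep -E 'failed to (start daemon|dial)|context deadline exceeded' "$1"
}
```

For example, `daemon_failures logs.txt` would surface the `failed to start daemon: failed to dial "/run/containerd/containerd.sock"` line seen above without scrolling through the full journal.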

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (71.91s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-499000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ingress-addon-legacy-499000 addons enable ingress --alsologtostderr -v=5: exit status 11 (1m0.118465s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 02:19:19.894281   12076 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0116 02:19:19.976811   12076 out.go:296] Setting OutFile to fd 980 ...
	I0116 02:19:19.993476   12076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:19:19.993572   12076 out.go:309] Setting ErrFile to fd 912...
	I0116 02:19:19.993572   12076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:19:20.009696   12076 mustload.go:65] Loading cluster: ingress-addon-legacy-499000
	I0116 02:19:20.010825   12076 config.go:182] Loaded profile config "ingress-addon-legacy-499000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0116 02:19:20.010825   12076 addons.go:597] checking whether the cluster is paused
	I0116 02:19:20.010825   12076 config.go:182] Loaded profile config "ingress-addon-legacy-499000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0116 02:19:20.011859   12076 host.go:66] Checking if "ingress-addon-legacy-499000" exists ...
	I0116 02:19:20.012082   12076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:19:22.172613   12076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:19:22.172613   12076 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:19:22.187007   12076 ssh_runner.go:195] Run: systemctl --version
	I0116 02:19:22.187007   12076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:19:24.322264   12076 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:19:24.322264   12076 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:19:24.322264   12076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:19:26.794556   12076 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:19:26.794778   12076 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:19:26.795085   12076 sshutil.go:53] new ssh client: &{IP:172.27.124.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ingress-addon-legacy-499000\id_rsa Username:docker}
	I0116 02:19:26.896733   12076 ssh_runner.go:235] Completed: systemctl --version: (4.7096942s)
	I0116 02:19:26.910651   12076 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0116 02:20:19.815700   12076 ssh_runner.go:235] Completed: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}: (52.9047002s)
	I0116 02:20:19.818169   12076 out.go:177] 
	W0116 02:20:19.818712   12076 out.go:239] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	W0116 02:20:19.818712   12076 out.go:239] * 
	W0116 02:20:19.849328   12076 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_addons_b2d105476118f14415115fac71674fdf7118bd0c_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 02:20:19.851160   12076 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-499000 -n ingress-addon-legacy-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-499000 -n ingress-addon-legacy-499000: exit status 6 (11.7945355s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 02:20:20.005615   12364 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0116 02:20:31.613204   12364 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-499000" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-499000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (71.91s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (60.3s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-499000 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ingress-addon-legacy-499000 addons enable ingress-dns --alsologtostderr -v=5: exit status 11 (48.4479357s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 02:20:31.798660    4684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0116 02:20:31.888373    4684 out.go:296] Setting OutFile to fd 912 ...
	I0116 02:20:31.908768    4684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:20:31.908835    4684 out.go:309] Setting ErrFile to fd 840...
	I0116 02:20:31.908835    4684 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:20:31.923763    4684 mustload.go:65] Loading cluster: ingress-addon-legacy-499000
	I0116 02:20:31.924763    4684 config.go:182] Loaded profile config "ingress-addon-legacy-499000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0116 02:20:31.924763    4684 addons.go:597] checking whether the cluster is paused
	I0116 02:20:31.924763    4684 config.go:182] Loaded profile config "ingress-addon-legacy-499000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0116 02:20:31.924763    4684 host.go:66] Checking if "ingress-addon-legacy-499000" exists ...
	I0116 02:20:31.925887    4684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:20:34.052434    4684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:20:34.052529    4684 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:20:34.071531    4684 ssh_runner.go:195] Run: systemctl --version
	I0116 02:20:34.071531    4684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ingress-addon-legacy-499000 ).state
	I0116 02:20:36.185406    4684 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:20:36.185496    4684 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:20:36.185496    4684 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ingress-addon-legacy-499000 ).networkadapters[0]).ipaddresses[0]
	I0116 02:20:38.713781    4684 main.go:141] libmachine: [stdout =====>] : 172.27.124.201
	
	I0116 02:20:38.713781    4684 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:20:38.713781    4684 sshutil.go:53] new ssh client: &{IP:172.27.124.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ingress-addon-legacy-499000\id_rsa Username:docker}
	I0116 02:20:38.815825    4684 ssh_runner.go:235] Completed: systemctl --version: (4.7442628s)
	I0116 02:20:38.826129    4684 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0116 02:21:20.048289    4684 ssh_runner.go:235] Completed: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}: (41.2207073s)
	I0116 02:21:20.050737    4684 out.go:177] 
	W0116 02:21:20.051524    4684 out.go:239] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	W0116 02:21:20.051592    4684 out.go:239] * 
	W0116 02:21:20.079586    4684 out.go:239] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_addons_8682f12bcaa29a4882725c600aef941ade248be8_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0116 02:21:20.081804    4684 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-499000 -n ingress-addon-legacy-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-499000 -n ingress-addon-legacy-499000: exit status 6 (11.8505829s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 02:21:20.256099    2664 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0116 02:21:31.909084    2664 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-499000" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-499000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (60.30s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (11.88s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:201: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-499000 -n ingress-addon-legacy-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p ingress-addon-legacy-499000 -n ingress-addon-legacy-499000: exit status 6 (11.8830648s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 02:21:32.096830   13024 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0116 02:21:43.793206   13024 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-499000" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-499000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (11.88s)

                                                
                                    
TestJSONOutput/start/Command (204.77s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-556300 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0116 02:23:13.006791   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:23:46.599344   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-556300 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: exit status 90 (3m24.7644115s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6abf3eb8-40b7-4e0c-9252-b98db8a32447","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-556300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bfc149fe-14be-4572-ac0b-40a8980afa11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube3\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"0e01292f-7b84-45c7-9b5c-1fdb0831c335","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"36090647-03ff-45fd-bc4d-967a63d0158b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"ff69ddbd-d0e7-4720-a27f-ed990a4695bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17967"}}
	{"specversion":"1.0","id":"3a0463d7-62e0-4efb-b24e-6f02ac645ae4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1c335303-d66b-4b21-8a2a-a08aa8c2f38b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the hyperv driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c93c2e36-f37f-43d5-b221-ec416025fe44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node json-output-556300 in cluster json-output-556300","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f599047a-a3b1-497a-bec5-ee21e8210226","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"3a8b7f11-feeb-4764-ad63-dd712cfbb642","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"90","issues":"","message":"Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1\nstdout:\n\nstderr:\nJob for docker.service failed because the control process exited with error code.\nSee \"systemctl status docker.service\" and \"journalctl -xe\" for details.\n\nsudo journalctl --no-pager -u docker:\n-- stdout --\n-- Journal begins at Tue 2024-01-16 02:23:35 UTC, ends at Tue 2024-01-16 02:25:57 UTC. --\nJan 16 02:24:25 json-output-556300 systemd[1]: Starting Docker Application Container Engine...\nJan 16 02:24:26 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:26.058702365Z\" level=info msg=\"Starting up\"\nJan 16 02:24:26 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:26.060001090Z\" level=info msg=\"containerd not running, startin
g managed containerd\"\nJan 16 02:24:26 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:26.062016752Z\" level=info msg=\"started new containerd process\" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=682\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.097229865Z\" level=info msg=\"starting containerd\" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.122067474Z\" level=info msg=\"loading plugin \\\"io.containerd.warning.v1.deprecations\\\"...\" type=io.containerd.warning.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.122236313Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.aufs\\\"...\" type=io.containerd.snapshotter.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.124926028Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.aufs\\\"...\" error=
\"aufs is not supported (modprobe aufs failed: exit status 1 \\\"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\\\n\\\"): skip plugin\" type=io.containerd.snapshotter.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.125031990Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.btrfs\\\"...\" type=io.containerd.snapshotter.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.125470329Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.btrfs\\\"...\" error=\"path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin\" type=io.containerd.snapshotter.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.125580389Z\" level=info msg=\"loading plugin \\\"io.containerd.content.v1.content\\\"...\" type=io.containerd.content.v1\nJan 16 02:24:26 json-output-556300 dockerd[682
]: time=\"2024-01-16T02:24:26.125678253Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.blockfile\\\"...\" type=io.containerd.snapshotter.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.125833496Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.blockfile\\\"...\" error=\"no scratch file generator: skip plugin\" type=io.containerd.snapshotter.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.125926362Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.native\\\"...\" type=io.containerd.snapshotter.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.126022127Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.overlayfs\\\"...\" type=io.containerd.snapshotter.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.126612111Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.devmapper\\\"...\" type=io.contai
nerd.snapshotter.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.126709676Z\" level=warning msg=\"failed to load plugin io.containerd.snapshotter.v1.devmapper\" error=\"devmapper not configured\"\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.126727869Z\" level=info msg=\"loading plugin \\\"io.containerd.snapshotter.v1.zfs\\\"...\" type=io.containerd.snapshotter.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.126913401Z\" level=info msg=\"skip loading plugin \\\"io.containerd.snapshotter.v1.zfs\\\"...\" error=\"path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin\" type=io.containerd.snapshotter.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.127006667Z\" level=info msg=\"loading plugin \\\"io.containerd.metadata.v1.bolt\\\"...\" type=io.containerd.metadata.v1\nJan 16 02:24:26 json-output-55630
0 dockerd[682]: time=\"2024-01-16T02:24:26.127076741Z\" level=warning msg=\"could not use snapshotter devmapper in metadata plugin\" error=\"devmapper not configured\"\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.127242580Z\" level=info msg=\"metadata content store policy set\" policy=shared\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.138378705Z\" level=info msg=\"loading plugin \\\"io.containerd.differ.v1.walking\\\"...\" type=io.containerd.differ.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.138524252Z\" level=info msg=\"loading plugin \\\"io.containerd.event.v1.exchange\\\"...\" type=io.containerd.event.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.138600124Z\" level=info msg=\"loading plugin \\\"io.containerd.gc.v1.scheduler\\\"...\" type=io.containerd.gc.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.138667799Z\" level=info msg=\"loading plu
gin \\\"io.containerd.lease.v1.manager\\\"...\" type=io.containerd.lease.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.138846534Z\" level=info msg=\"loading plugin \\\"io.containerd.nri.v1.nri\\\"...\" type=io.containerd.nri.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.138946697Z\" level=info msg=\"NRI interface is disabled by configuration.\"\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.138975986Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v2.task\\\"...\" type=io.containerd.runtime.v2\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.139092844Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v2.shim\\\"...\" type=io.containerd.runtime.v2\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.139251385Z\" level=info msg=\"loading plugin \\\"io.containerd.sandbox.store.v1.local\\\"...\" type=io.containerd.sandbox.store.v1\nJan 16 02:
24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.139274777Z\" level=info msg=\"loading plugin \\\"io.containerd.sandbox.controller.v1.local\\\"...\" type=io.containerd.sandbox.controller.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.139298268Z\" level=info msg=\"loading plugin \\\"io.containerd.streaming.v1.manager\\\"...\" type=io.containerd.streaming.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.139316662Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.introspection-service\\\"...\" type=io.containerd.service.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.139336354Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.containers-service\\\"...\" type=io.containerd.service.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.139352249Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.content-service\\\"...\" type=io.containerd
.service.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.139366943Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.diff-service\\\"...\" type=io.containerd.service.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.139384137Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.images-service\\\"...\" type=io.containerd.service.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.139400031Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.namespaces-service\\\"...\" type=io.containerd.service.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.139415425Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.snapshots-service\\\"...\" type=io.containerd.service.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.139428821Z\" level=info msg=\"loading plugin \\\"io.containerd.runtime.v1.linux\\\"...\" type=io.containerd.ru
ntime.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.139609554Z\" level=info msg=\"loading plugin \\\"io.containerd.monitor.v1.cgroups\\\"...\" type=io.containerd.monitor.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.140400765Z\" level=info msg=\"loading plugin \\\"io.containerd.service.v1.tasks-service\\\"...\" type=io.containerd.service.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.140608689Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.introspection\\\"...\" type=io.containerd.grpc.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.140636978Z\" level=info msg=\"loading plugin \\\"io.containerd.transfer.v1.local\\\"...\" type=io.containerd.transfer.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.140665968Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.restart\\\"...\" type=io.containerd.internal.v1\nJan 16 02:24:26
json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.140757734Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.containers\\\"...\" type=io.containerd.grpc.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.140776227Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.content\\\"...\" type=io.containerd.grpc.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.140790722Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.diff\\\"...\" type=io.containerd.grpc.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.140804117Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.events\\\"...\" type=io.containerd.grpc.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.140818612Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.healthcheck\\\"...\" type=io.containerd.grpc.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.1
40833606Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.images\\\"...\" type=io.containerd.grpc.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.140846602Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.leases\\\"...\" type=io.containerd.grpc.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.140859997Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.namespaces\\\"...\" type=io.containerd.grpc.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.140880889Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.opt\\\"...\" type=io.containerd.internal.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.140942766Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.sandbox-controllers\\\"...\" type=io.containerd.grpc.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.140962959Z\" level=info msg=\"loading plugin \\\
"io.containerd.grpc.v1.sandboxes\\\"...\" type=io.containerd.grpc.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.140978054Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.snapshots\\\"...\" type=io.containerd.grpc.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.140991948Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.streaming\\\"...\" type=io.containerd.grpc.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.141011241Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.tasks\\\"...\" type=io.containerd.grpc.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.141027136Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.transfer\\\"...\" type=io.containerd.grpc.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.141041130Z\" level=info msg=\"loading plugin \\\"io.containerd.grpc.v1.version\\\"...\" type=io.containerd.
grpc.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.141058224Z\" level=info msg=\"loading plugin \\\"io.containerd.tracing.processor.v1.otlp\\\"...\" type=io.containerd.tracing.processor.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.141074918Z\" level=info msg=\"skip loading plugin \\\"io.containerd.tracing.processor.v1.otlp\\\"...\" error=\"no OpenTelemetry endpoint: skip plugin\" type=io.containerd.tracing.processor.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.141088413Z\" level=info msg=\"loading plugin \\\"io.containerd.internal.v1.tracing\\\"...\" type=io.containerd.internal.v1\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.141100509Z\" level=info msg=\"skipping tracing processor initialization (no tracing plugin)\" error=\"no OpenTelemetry endpoint: skip plugin\"\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.142082449Z\" level=info msg=serving
... address=/var/run/docker/containerd/containerd-debug.sock\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.142285075Z\" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.142436320Z\" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock\nJan 16 02:24:26 json-output-556300 dockerd[682]: time=\"2024-01-16T02:24:26.142762401Z\" level=info msg=\"containerd successfully booted in 0.048132s\"\nJan 16 02:24:26 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:26.188019237Z\" level=info msg=\"[graphdriver] trying configured driver: overlay2\"\nJan 16 02:24:26 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:26.200408603Z\" level=info msg=\"Loading containers: start.\"\nJan 16 02:24:26 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:26.401569182Z\" level=info msg=\"Loading containers: done.\"\nJan 16 02:24:26 json-output-556300 docke
rd[676]: time=\"2024-01-16T02:24:26.421512283Z\" level=warning msg=\"WARNING: No blkio throttle.read_bps_device support\"\nJan 16 02:24:26 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:26.421538173Z\" level=warning msg=\"WARNING: No blkio throttle.write_bps_device support\"\nJan 16 02:24:26 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:26.421545271Z\" level=warning msg=\"WARNING: No blkio throttle.read_iops_device support\"\nJan 16 02:24:26 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:26.421554168Z\" level=warning msg=\"WARNING: No blkio throttle.write_iops_device support\"\nJan 16 02:24:26 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:26.421575160Z\" level=info msg=\"Docker daemon\" commit=311b9ff graphdriver=overlay2 version=24.0.7\nJan 16 02:24:26 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:26.421738100Z\" level=info msg=\"Daemon has completed initialization\"\nJan 16 02:24:26 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:26.473231954Z\"
level=info msg=\"API listen on /var/run/docker.sock\"\nJan 16 02:24:26 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:26.473316523Z\" level=info msg=\"API listen on [::]:2376\"\nJan 16 02:24:26 json-output-556300 systemd[1]: Started Docker Application Container Engine.\nJan 16 02:24:56 json-output-556300 systemd[1]: Stopping Docker Application Container Engine...\nJan 16 02:24:56 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:56.839484834Z\" level=info msg=\"Processing signal 'terminated'\"\nJan 16 02:24:56 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:56.841603534Z\" level=info msg=\"Daemon shutdown complete\"\nJan 16 02:24:56 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:56.841614734Z\" level=info msg=\"stopping event stream following graceful shutdown\" error=\"\u003cnil\u003e\" module=libcontainerd namespace=moby\nJan 16 02:24:56 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:56.841695034Z\" level=info msg=\"stopping healthcheck following graceful shutdo
wn\" module=libcontainerd\nJan 16 02:24:56 json-output-556300 dockerd[676]: time=\"2024-01-16T02:24:56.841996334Z\" level=info msg=\"stopping event stream following graceful shutdown\" error=\"context canceled\" module=libcontainerd namespace=plugins.moby\nJan 16 02:24:57 json-output-556300 systemd[1]: docker.service: Succeeded.\nJan 16 02:24:57 json-output-556300 systemd[1]: Stopped Docker Application Container Engine.\nJan 16 02:24:57 json-output-556300 systemd[1]: Starting Docker Application Container Engine...\nJan 16 02:24:57 json-output-556300 dockerd[1012]: time=\"2024-01-16T02:24:57.914466634Z\" level=info msg=\"Starting up\"\nJan 16 02:25:57 json-output-556300 dockerd[1012]: failed to start daemon: failed to dial \"/run/containerd/containerd.sock\": failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded\nJan 16 02:25:57 json-output-556300 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE\nJan 16 02:25:57 json-output-556300 systemd[1]: docker.service
: Failed with result 'exit-code'.\nJan 16 02:25:57 json-output-556300 systemd[1]: Failed to start Docker Application Container Engine.\n\n-- /stdout --","name":"RUNTIME_ENABLE","url":""}}
	{"specversion":"1.0","id":"01fe0be0-b689-454f-867b-4703185061d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 02:22:33.393197    5660 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe start -p json-output-556300 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv": exit status 90
--- FAIL: TestJSONOutput/start/Command (204.77s)

                                                
                                    
TestJSONOutput/pause/Command (8.56s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-556300 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe pause -p json-output-556300 --output=json --user=testUser: exit status 80 (8.5633104s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"534413d2-e560-483f-80d4-d29e1861c66b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-556300 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"5e457cc5-3e53-4e21-8c44-88e41f702ad1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1\nstdout:\n\nstderr:\nFailed to disable unit: Unit file kubelet.service does not exist.","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"a276f54c-f670-41b7-b28c-1404f71f8fc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                     │\n│    If the above advice does not help, please let us know:                                                           │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                         │\n│
│\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │\n│    Please also attach the following file to the GitHub issue:                                                       │\n│    - C:\\Users\\jenkins.minikube3\\AppData\\Local\\Temp\\minikube_pause_26475df06b51455fca7312b7aad83667d1d3f5a8_1.log    │\n│                                                                                                                     │\n╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 02:25:58.169277    6508 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe pause -p json-output-556300 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (8.56s)

                                                
                                    
TestJSONOutput/unpause/Command (51.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-556300 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-windows-amd64.exe unpause -p json-output-556300 --output=json --user=testUser: exit status 80 (51.6513303s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f7a4ed71-56e4-4df8-aa4c-430b0ccc7338","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-556300 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"0d73bc6c-7fa2-4fa8-a032-a5ff60d4c364","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format=\u003cno value\u003e: Process exited with status 1\nstdout:\n\nstderr:\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"aa778df2-53b4-4ef6-973a-79854b86401c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                                                       │\n│    If the above advice does not help, please let us know:                                                             │\n│    https://github.com/kubernetes/minikube/issues/new/choose                                                           │\n│                                                                                                                       │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │\n│    Please also attach the following file to the GitHub issue:                                                         │\n│    - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_unpause_61fcee5b7a02886a9c50c80a17ea0aa3a64e4614_1.log    │\n│                                                                                                                       │\n╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
** stderr ** 
	W0116 02:26:06.722956    8048 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-windows-amd64.exe unpause -p json-output-556300 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (51.65s)

TestMultiNode/serial/PingHostFrom2Pods (57.04s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- exec busybox-5bc68d56bd-9t8fh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- exec busybox-5bc68d56bd-9t8fh -- sh -c "ping -c 1 172.27.112.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- exec busybox-5bc68d56bd-9t8fh -- sh -c "ping -c 1 172.27.112.1": exit status 1 (10.5240747s)
-- stdout --
	PING 172.27.112.1 (172.27.112.1): 56 data bytes
	
	--- 172.27.112.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss
-- /stdout --
** stderr ** 
	W0116 02:52:05.684844   12960 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1
** /stderr **
multinode_test.go:600: Failed to ping host (172.27.112.1) from pod (busybox-5bc68d56bd-9t8fh): exit status 1
multinode_test.go:588: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- exec busybox-5bc68d56bd-fp6wc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- exec busybox-5bc68d56bd-fp6wc -- sh -c "ping -c 1 172.27.112.1"
multinode_test.go:599: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- exec busybox-5bc68d56bd-fp6wc -- sh -c "ping -c 1 172.27.112.1": exit status 1 (10.5492974s)
-- stdout --
	PING 172.27.112.1 (172.27.112.1): 56 data bytes
	
	--- 172.27.112.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss
-- /stdout --
** stderr ** 
	W0116 02:52:16.752288    8808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1
** /stderr **
multinode_test.go:600: Failed to ping host (172.27.112.1) from pod (busybox-5bc68d56bd-fp6wc): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-853900 -n multinode-853900
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-853900 -n multinode-853900: (12.0085585s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 logs -n 25: (8.5192072s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-771100 ssh -- ls                    | mount-start-2-771100 | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:41 UTC | 16 Jan 24 02:41 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-771100                           | mount-start-1-771100 | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:41 UTC | 16 Jan 24 02:42 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-771100 ssh -- ls                    | mount-start-2-771100 | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:42 UTC | 16 Jan 24 02:42 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-771100                           | mount-start-2-771100 | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:42 UTC | 16 Jan 24 02:42 UTC |
	| start   | -p mount-start-2-771100                           | mount-start-2-771100 | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:42 UTC | 16 Jan 24 02:44 UTC |
	| mount   | C:\Users\jenkins.minikube3:/minikube-host         | mount-start-2-771100 | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:44 UTC |                     |
	|         | --profile mount-start-2-771100 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-771100 ssh -- ls                    | mount-start-2-771100 | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:44 UTC | 16 Jan 24 02:44 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-771100                           | mount-start-2-771100 | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:44 UTC | 16 Jan 24 02:45 UTC |
	| delete  | -p mount-start-1-771100                           | mount-start-1-771100 | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:45 UTC | 16 Jan 24 02:45 UTC |
	| start   | -p multinode-853900                               | multinode-853900     | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:45 UTC | 16 Jan 24 02:51 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-853900 -- apply -f                   | multinode-853900     | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:51 UTC | 16 Jan 24 02:51 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-853900 -- rollout                    | multinode-853900     | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:51 UTC | 16 Jan 24 02:51 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-853900 -- get pods -o                | multinode-853900     | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:51 UTC | 16 Jan 24 02:51 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-853900 -- get pods -o                | multinode-853900     | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:51 UTC | 16 Jan 24 02:51 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-853900 -- exec                       | multinode-853900     | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:52 UTC | 16 Jan 24 02:52 UTC |
	|         | busybox-5bc68d56bd-9t8fh --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-853900 -- exec                       | multinode-853900     | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:52 UTC | 16 Jan 24 02:52 UTC |
	|         | busybox-5bc68d56bd-fp6wc --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-853900 -- exec                       | multinode-853900     | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:52 UTC | 16 Jan 24 02:52 UTC |
	|         | busybox-5bc68d56bd-9t8fh --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-853900 -- exec                       | multinode-853900     | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:52 UTC | 16 Jan 24 02:52 UTC |
	|         | busybox-5bc68d56bd-fp6wc --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-853900 -- exec                       | multinode-853900     | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:52 UTC | 16 Jan 24 02:52 UTC |
	|         | busybox-5bc68d56bd-9t8fh -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-853900 -- exec                       | multinode-853900     | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:52 UTC | 16 Jan 24 02:52 UTC |
	|         | busybox-5bc68d56bd-fp6wc -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-853900 -- get pods -o                | multinode-853900     | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:52 UTC | 16 Jan 24 02:52 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-853900 -- exec                       | multinode-853900     | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:52 UTC | 16 Jan 24 02:52 UTC |
	|         | busybox-5bc68d56bd-9t8fh                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-853900 -- exec                       | multinode-853900     | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:52 UTC |                     |
	|         | busybox-5bc68d56bd-9t8fh -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.112.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-853900 -- exec                       | multinode-853900     | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:52 UTC | 16 Jan 24 02:52 UTC |
	|         | busybox-5bc68d56bd-fp6wc                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-853900 -- exec                       | multinode-853900     | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:52 UTC |                     |
	|         | busybox-5bc68d56bd-fp6wc -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.112.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 02:45:11
	Running on machine: minikube3
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 02:45:11.150977    6604 out.go:296] Setting OutFile to fd 796 ...
	I0116 02:45:11.151759    6604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:45:11.151759    6604 out.go:309] Setting ErrFile to fd 664...
	I0116 02:45:11.151759    6604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:45:11.176535    6604 out.go:303] Setting JSON to false
	I0116 02:45:11.179753    6604 start.go:128] hostinfo: {"hostname":"minikube3","uptime":51502,"bootTime":1705321609,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0116 02:45:11.180421    6604 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0116 02:45:11.181213    6604 out.go:177] * [multinode-853900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0116 02:45:11.181845    6604 notify.go:220] Checking for updates...
	I0116 02:45:11.183138    6604 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 02:45:11.183959    6604 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:45:11.184804    6604 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0116 02:45:11.185168    6604 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 02:45:11.186156    6604 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:45:11.188061    6604 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 02:45:16.516157    6604 out.go:177] * Using the hyperv driver based on user configuration
	I0116 02:45:16.516983    6604 start.go:298] selected driver: hyperv
	I0116 02:45:16.516983    6604 start.go:902] validating driver "hyperv" against <nil>
	I0116 02:45:16.516983    6604 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 02:45:16.563616    6604 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 02:45:16.565171    6604 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 02:45:16.565298    6604 cni.go:84] Creating CNI manager for ""
	I0116 02:45:16.565370    6604 cni.go:136] 0 nodes found, recommending kindnet
	I0116 02:45:16.565370    6604 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0116 02:45:16.565426    6604 start_flags.go:321] config:
	{Name:multinode-853900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-853900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:45:16.565426    6604 iso.go:125] acquiring lock: {Name:mk2c0b62d272a573835231fdc54419c800e07e34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 02:45:16.567626    6604 out.go:177] * Starting control plane node multinode-853900 in cluster multinode-853900
	I0116 02:45:16.568365    6604 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0116 02:45:16.568365    6604 preload.go:148] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0116 02:45:16.568365    6604 cache.go:56] Caching tarball of preloaded images
	I0116 02:45:16.569197    6604 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0116 02:45:16.569414    6604 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0116 02:45:16.569907    6604 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\config.json ...
	I0116 02:45:16.569907    6604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\config.json: {Name:mk7ce8679dbc6c9d2269ab4ef79d63f209a40d00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:45:16.571326    6604 start.go:365] acquiring machines lock for multinode-853900: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 02:45:16.571540    6604 start.go:369] acquired machines lock for "multinode-853900" in 0s
	I0116 02:45:16.571540    6604 start.go:93] Provisioning new machine with config: &{Name:multinode-853900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-853900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0116 02:45:16.571540    6604 start.go:125] createHost starting for "" (driver="hyperv")
	I0116 02:45:16.572261    6604 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0116 02:45:16.572261    6604 start.go:159] libmachine.API.Create for "multinode-853900" (driver="hyperv")
	I0116 02:45:16.572261    6604 client.go:168] LocalClient.Create starting
	I0116 02:45:16.572261    6604 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0116 02:45:16.573584    6604 main.go:141] libmachine: Decoding PEM data...
	I0116 02:45:16.573682    6604 main.go:141] libmachine: Parsing certificate...
	I0116 02:45:16.573778    6604 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0116 02:45:16.574129    6604 main.go:141] libmachine: Decoding PEM data...
	I0116 02:45:16.574161    6604 main.go:141] libmachine: Parsing certificate...
	I0116 02:45:16.574328    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0116 02:45:18.650478    6604 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0116 02:45:18.650587    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:45:18.650587    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0116 02:45:20.377815    6604 main.go:141] libmachine: [stdout =====>] : False
	
	I0116 02:45:20.377994    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:45:20.378067    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0116 02:45:21.878737    6604 main.go:141] libmachine: [stdout =====>] : True
	
	I0116 02:45:21.878737    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:45:21.878737    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0116 02:45:25.428804    6604 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0116 02:45:25.428804    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:45:25.431762    6604 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 02:45:25.884825    6604 main.go:141] libmachine: Creating SSH key...
	I0116 02:45:26.145983    6604 main.go:141] libmachine: Creating VM...
	I0116 02:45:26.145983    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0116 02:45:28.938970    6604 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0116 02:45:28.939265    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:45:28.939409    6604 main.go:141] libmachine: Using switch "Default Switch"
	I0116 02:45:28.939409    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0116 02:45:30.677501    6604 main.go:141] libmachine: [stdout =====>] : True
	
	I0116 02:45:30.677501    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:45:30.677604    6604 main.go:141] libmachine: Creating VHD
	I0116 02:45:30.677604    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\fixed.vhd' -SizeBytes 10MB -Fixed
	I0116 02:45:34.465548    6604 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : CCF58BA8-B3D6-47FD-BCEE-FE75066DB93F
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0116 02:45:34.465548    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:45:34.465548    6604 main.go:141] libmachine: Writing magic tar header
	I0116 02:45:34.465548    6604 main.go:141] libmachine: Writing SSH key tar header
	I0116 02:45:34.477746    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\disk.vhd' -VHDType Dynamic -DeleteSource
	I0116 02:45:37.627594    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:45:37.627594    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:45:37.627750    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\disk.vhd' -SizeBytes 20000MB
	I0116 02:45:40.183897    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:45:40.184100    6604 main.go:141] libmachine: [stderr =====>] : 
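The "Writing magic tar header / Writing SSH key tar header" steps above seed the small fixed VHD with a tar archive carrying the SSH key, which the boot2docker guest unpacks on first boot; the fixed VHD is then converted to dynamic and resized. A hedged sketch of building that tar blob in Go (the archive entry name and key content are assumptions for illustration; a fixed VHD's payload starts at offset 0, so the blob can be written straight to the start of `fixed.vhd` before `Convert-VHD` runs):

```go
// Sketch: build a tar archive containing an SSH public key, suitable for
// writing to the head of a fixed VHD's data region.
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
)

func buildKeyTar(pubKey []byte) ([]byte, error) {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	hdr := &tar.Header{
		Name: ".ssh/authorized_keys", // illustrative path, not minikube's exact entry
		Mode: 0644,
		Size: int64(len(pubKey)),
	}
	if err := tw.WriteHeader(hdr); err != nil {
		return nil, err
	}
	if _, err := tw.Write(pubKey); err != nil {
		return nil, err
	}
	if err := tw.Close(); err != nil { // flushes padding and end-of-archive blocks
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	data, err := buildKeyTar([]byte("ssh-rsa AAAA... jenkins"))
	if err != nil {
		panic(err)
	}
	fmt.Println(len(data)) // a few 512-byte tar blocks, well under the 10MB VHD
}
```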
	I0116 02:45:40.184186    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-853900 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0116 02:45:43.740257    6604 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-853900 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0116 02:45:43.740518    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:45:43.740518    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-853900 -DynamicMemoryEnabled $false
	I0116 02:45:45.924810    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:45:45.924916    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:45:45.924916    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-853900 -Count 2
	I0116 02:45:48.044729    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:45:48.044729    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:45:48.044729    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-853900 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\boot2docker.iso'
	I0116 02:45:50.585925    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:45:50.586009    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:45:50.586086    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-853900 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\disk.vhd'
	I0116 02:45:53.131661    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:45:53.131760    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:45:53.131760    6604 main.go:141] libmachine: Starting VM...
	I0116 02:45:53.131897    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-853900
	I0116 02:45:55.945698    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:45:55.945855    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:45:55.945910    6604 main.go:141] libmachine: Waiting for host to start...
	I0116 02:45:55.945910    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:45:58.158144    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:45:58.158144    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:45:58.158252    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:46:00.662538    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:46:00.662538    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:01.677501    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:46:03.922016    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:46:03.922016    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:03.922105    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:46:06.415104    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:46:06.415104    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:07.416680    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:46:09.578118    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:46:09.578289    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:09.578369    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:46:12.135103    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:46:12.135103    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:13.137507    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:46:15.352235    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:46:15.352402    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:15.352477    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:46:17.898557    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:46:17.898657    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:18.900025    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:46:21.140915    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:46:21.140915    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:21.140915    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:46:23.648458    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:46:23.648458    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:23.648739    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:46:25.754158    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:46:25.754158    6604 main.go:141] libmachine: [stderr =====>] : 
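The repeated `( Get-VM ).state` / `ipaddresses[0]` calls above are a poll loop: Hyper-V reports the VM `Running` well before the guest has a DHCP lease, so an empty stdout from the IP query means "sleep and retry" until an address such as 172.27.112.69 appears. A generic sketch of that pattern, with `getIP` standing in for the PowerShell invocation:

```go
// Sketch: retry a probe function until it yields a non-empty IP or the
// attempt budget is exhausted, as the hyperv driver does while the guest
// boots and acquires a DHCP lease.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitForIP(getIP func() string, attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip := getIP(); ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", errors.New("timed out waiting for VM IP")
}

func main() {
	// Simulate the log above: several empty answers, then an address.
	answers := []string{"", "", "", "", "", "172.27.112.69"}
	i := 0
	getIP := func() string { ip := answers[i]; i++; return ip }
	ip, err := waitForIP(getIP, 10, time.Millisecond)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 172.27.112.69
}
```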
	I0116 02:46:25.754357    6604 machine.go:88] provisioning docker machine ...
	I0116 02:46:25.754440    6604 buildroot.go:166] provisioning hostname "multinode-853900"
	I0116 02:46:25.754516    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:46:27.892820    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:46:27.892820    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:27.892909    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:46:30.419568    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:46:30.419568    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:30.425812    6604 main.go:141] libmachine: Using SSH client type: native
	I0116 02:46:30.437292    6604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.112.69 22 <nil> <nil>}
	I0116 02:46:30.437292    6604 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-853900 && echo "multinode-853900" | sudo tee /etc/hostname
	I0116 02:46:30.588109    6604 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-853900
	
	I0116 02:46:30.588109    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:46:32.712139    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:46:32.712139    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:32.712139    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:46:35.339356    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:46:35.339356    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:35.344999    6604 main.go:141] libmachine: Using SSH client type: native
	I0116 02:46:35.345874    6604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.112.69 22 <nil> <nil>}
	I0116 02:46:35.345874    6604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-853900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-853900/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-853900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:46:35.503047    6604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:46:35.503151    6604 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0116 02:46:35.503262    6604 buildroot.go:174] setting up certificates
	I0116 02:46:35.503262    6604 provision.go:83] configureAuth start
	I0116 02:46:35.503445    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:46:37.635674    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:46:37.635947    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:37.635947    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:46:40.182242    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:46:40.182437    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:40.182437    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:46:42.318505    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:46:42.318505    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:42.318617    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:46:44.818840    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:46:44.818840    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:44.818922    6604 provision.go:138] copyHostCerts
	I0116 02:46:44.819072    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0116 02:46:44.819381    6604 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0116 02:46:44.819466    6604 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0116 02:46:44.819950    6604 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0116 02:46:44.820784    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0116 02:46:44.821339    6604 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0116 02:46:44.821436    6604 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0116 02:46:44.821436    6604 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0116 02:46:44.822670    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0116 02:46:44.822670    6604 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0116 02:46:44.822670    6604 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0116 02:46:44.823366    6604 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1675 bytes)
	I0116 02:46:44.825967    6604 provision.go:112] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-853900 san=[172.27.112.69 172.27.112.69 localhost 127.0.0.1 minikube multinode-853900]
	I0116 02:46:45.157485    6604 provision.go:172] copyRemoteCerts
	I0116 02:46:45.173525    6604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:46:45.173525    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:46:47.272751    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:46:47.272751    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:47.272982    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:46:49.738880    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:46:49.738929    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:49.738929    6604 sshutil.go:53] new ssh client: &{IP:172.27.112.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\id_rsa Username:docker}
	I0116 02:46:49.847068    6604 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6735126s)
	I0116 02:46:49.847068    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0116 02:46:49.847068    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0116 02:46:49.888109    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0116 02:46:49.888516    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 02:46:49.924934    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0116 02:46:49.925900    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 02:46:49.963697    6604 provision.go:86] duration metric: configureAuth took 14.4602052s
	I0116 02:46:49.963697    6604 buildroot.go:189] setting minikube options for container-runtime
	I0116 02:46:49.964344    6604 config.go:182] Loaded profile config "multinode-853900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 02:46:49.964410    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:46:52.029912    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:46:52.030119    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:52.030119    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:46:54.524896    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:46:54.524896    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:54.530087    6604 main.go:141] libmachine: Using SSH client type: native
	I0116 02:46:54.530715    6604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.112.69 22 <nil> <nil>}
	I0116 02:46:54.530715    6604 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0116 02:46:54.675106    6604 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0116 02:46:54.675106    6604 buildroot.go:70] root file system type: tmpfs
	I0116 02:46:54.675638    6604 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0116 02:46:54.675732    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:46:56.738711    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:46:56.739017    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:56.739087    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:46:59.228213    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:46:59.228213    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:46:59.234249    6604 main.go:141] libmachine: Using SSH client type: native
	I0116 02:46:59.235082    6604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.112.69 22 <nil> <nil>}
	I0116 02:46:59.235082    6604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0116 02:46:59.399322    6604 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0116 02:46:59.399534    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:47:01.507352    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:47:01.507352    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:47:01.507455    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:47:04.033991    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:47:04.033991    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:47:04.040247    6604 main.go:141] libmachine: Using SSH client type: native
	I0116 02:47:04.041022    6604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.112.69 22 <nil> <nil>}
	I0116 02:47:04.041022    6604 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0116 02:47:05.000968    6604 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0116 02:47:05.000968    6604 machine.go:91] provisioned docker machine in 39.2463563s
	I0116 02:47:05.000968    6604 client.go:171] LocalClient.Create took 1m48.4279974s
	I0116 02:47:05.000968    6604 start.go:167] duration metric: libmachine.API.Create for "multinode-853900" took 1m48.4279974s
	I0116 02:47:05.000968    6604 start.go:300] post-start starting for "multinode-853900" (driver="hyperv")
	I0116 02:47:05.000968    6604 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:47:05.018217    6604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:47:05.018217    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:47:07.132947    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:47:07.132947    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:47:07.133061    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:47:09.684548    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:47:09.684548    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:47:09.684827    6604 sshutil.go:53] new ssh client: &{IP:172.27.112.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\id_rsa Username:docker}
	I0116 02:47:09.792857    6604 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7746091s)
	I0116 02:47:09.807258    6604 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:47:09.818236    6604 command_runner.go:130] > NAME=Buildroot
	I0116 02:47:09.818574    6604 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 02:47:09.818574    6604 command_runner.go:130] > ID=buildroot
	I0116 02:47:09.818624    6604 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 02:47:09.818624    6604 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 02:47:09.818680    6604 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 02:47:09.818739    6604 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0116 02:47:09.819123    6604 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0116 02:47:09.820375    6604 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> 135082.pem in /etc/ssl/certs
	I0116 02:47:09.820442    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> /etc/ssl/certs/135082.pem
	I0116 02:47:09.835043    6604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 02:47:09.851158    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem --> /etc/ssl/certs/135082.pem (1708 bytes)
	I0116 02:47:09.891379    6604 start.go:303] post-start completed in 4.890379s
	I0116 02:47:09.894499    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:47:12.021257    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:47:12.021508    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:47:12.021664    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:47:14.549543    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:47:14.549543    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:47:14.549823    6604 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\config.json ...
	I0116 02:47:14.553530    6604 start.go:128] duration metric: createHost completed in 1m57.9812181s
	I0116 02:47:14.553530    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:47:16.698077    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:47:16.698077    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:47:16.698182    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:47:19.196652    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:47:19.196984    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:47:19.202828    6604 main.go:141] libmachine: Using SSH client type: native
	I0116 02:47:19.203598    6604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.112.69 22 <nil> <nil>}
	I0116 02:47:19.203598    6604 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0116 02:47:19.341336    6604 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705373239.341921638
	
	I0116 02:47:19.341336    6604 fix.go:206] guest clock: 1705373239.341921638
	I0116 02:47:19.341336    6604 fix.go:219] Guest: 2024-01-16 02:47:19.341921638 +0000 UTC Remote: 2024-01-16 02:47:14.5535305 +0000 UTC m=+123.568773301 (delta=4.788391138s)
	I0116 02:47:19.341965    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:47:21.469949    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:47:21.469949    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:47:21.470052    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:47:23.994888    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:47:23.994919    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:47:24.001365    6604 main.go:141] libmachine: Using SSH client type: native
	I0116 02:47:24.002086    6604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.112.69 22 <nil> <nil>}
	I0116 02:47:24.002086    6604 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705373239
	I0116 02:47:24.151980    6604 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jan 16 02:47:19 UTC 2024
	
	I0116 02:47:24.151980    6604 fix.go:226] clock set: Tue Jan 16 02:47:19 UTC 2024
	 (err=<nil>)
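The `fix.go` lines above show minikube reading the guest clock over SSH (`date +%s.%N`), comparing it with the local timestamp, and issuing `sudo date -s @<epoch>` when they disagree. A minimal sketch of that drift check; the epoch values and the 1-second tolerance are illustrative, not minikube's actual numbers:

```shell
# Sketch of the clock-drift check logged above. Epoch values and the
# 1-second tolerance are assumptions for illustration only.
guest_epoch=1705373239   # what `date +%s.%N` (truncated) returned on the guest
host_epoch=1705373234    # hypothetical local reading at the same moment
delta=$((guest_epoch - host_epoch))
if [ "$delta" -lt 0 ]; then delta=$((0 - delta)); fi
if [ "$delta" -gt 1 ]; then
    # On the real guest minikube runs: sudo date -s @<epoch>
    echo "drift ${delta}s: guest clock would be reset"
else
    echo "drift ${delta}s: within tolerance"
fi
```

The log's delta of ~4.8s would take the reset branch here.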
	I0116 02:47:24.151980    6604 start.go:83] releasing machines lock for "multinode-853900", held for 2m7.5796059s
	I0116 02:47:24.151980    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:47:26.284449    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:47:26.284715    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:47:26.284776    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:47:28.804731    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:47:28.804731    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:47:28.809403    6604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:47:28.809580    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:47:28.825939    6604 ssh_runner.go:195] Run: cat /version.json
	I0116 02:47:28.825939    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:47:30.980929    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:47:30.981413    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:47:30.981413    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:47:31.003272    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:47:31.003272    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:47:31.003477    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:47:33.683831    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:47:33.683831    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:47:33.683831    6604 sshutil.go:53] new ssh client: &{IP:172.27.112.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\id_rsa Username:docker}
	I0116 02:47:33.710776    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:47:33.710990    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:47:33.710990    6604 sshutil.go:53] new ssh client: &{IP:172.27.112.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\id_rsa Username:docker}
	I0116 02:47:33.888927    6604 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 02:47:33.888927    6604 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0794907s)
	I0116 02:47:33.888927    6604 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0116 02:47:33.888927    6604 ssh_runner.go:235] Completed: cat /version.json: (5.0629552s)
	I0116 02:47:33.905508    6604 ssh_runner.go:195] Run: systemctl --version
	I0116 02:47:33.914792    6604 command_runner.go:130] > systemd 247 (247)
	I0116 02:47:33.914792    6604 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0116 02:47:33.928700    6604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 02:47:33.938112    6604 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0116 02:47:33.938572    6604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 02:47:33.952175    6604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:47:33.977383    6604 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0116 02:47:33.977383    6604 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 02:47:33.977383    6604 start.go:475] detecting cgroup driver to use...
	I0116 02:47:33.977383    6604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:47:34.012682    6604 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0116 02:47:34.026969    6604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0116 02:47:34.061387    6604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0116 02:47:34.083933    6604 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0116 02:47:34.097649    6604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0116 02:47:34.127417    6604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 02:47:34.156780    6604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0116 02:47:34.186448    6604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 02:47:34.218386    6604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:47:34.248597    6604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0116 02:47:34.279084    6604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:47:34.294608    6604 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0116 02:47:34.310171    6604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:47:34.339163    6604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:47:34.522243    6604 ssh_runner.go:195] Run: sudo systemctl restart containerd
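The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to force the `cgroupfs` cgroup driver. The key substitution, sketched against a scratch copy of the file rather than the real config (GNU `sed -i` assumed, as on the Buildroot guest):

```shell
# The SystemdCgroup rewrite from the log, applied to a scratch copy of
# config.toml instead of /etc/containerd/config.toml (GNU sed assumed).
cfg=/tmp/containerd_config.$$.toml
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitution the log runs; the capture group preserves indentation.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

On the guest this is followed by `systemctl daemon-reload` and `systemctl restart containerd`, as the next log lines show.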
	I0116 02:47:34.550154    6604 start.go:475] detecting cgroup driver to use...
	I0116 02:47:34.564348    6604 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0116 02:47:34.585507    6604 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0116 02:47:34.585507    6604 command_runner.go:130] > [Unit]
	I0116 02:47:34.585507    6604 command_runner.go:130] > Description=Docker Application Container Engine
	I0116 02:47:34.585507    6604 command_runner.go:130] > Documentation=https://docs.docker.com
	I0116 02:47:34.585507    6604 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0116 02:47:34.585507    6604 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0116 02:47:34.585507    6604 command_runner.go:130] > StartLimitBurst=3
	I0116 02:47:34.585507    6604 command_runner.go:130] > StartLimitIntervalSec=60
	I0116 02:47:34.585507    6604 command_runner.go:130] > [Service]
	I0116 02:47:34.585507    6604 command_runner.go:130] > Type=notify
	I0116 02:47:34.585507    6604 command_runner.go:130] > Restart=on-failure
	I0116 02:47:34.585507    6604 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0116 02:47:34.585507    6604 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0116 02:47:34.585507    6604 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0116 02:47:34.585507    6604 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0116 02:47:34.585507    6604 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0116 02:47:34.585507    6604 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0116 02:47:34.585507    6604 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0116 02:47:34.585507    6604 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0116 02:47:34.585507    6604 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0116 02:47:34.585507    6604 command_runner.go:130] > ExecStart=
	I0116 02:47:34.585507    6604 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0116 02:47:34.585507    6604 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0116 02:47:34.585507    6604 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0116 02:47:34.585507    6604 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0116 02:47:34.585507    6604 command_runner.go:130] > LimitNOFILE=infinity
	I0116 02:47:34.585507    6604 command_runner.go:130] > LimitNPROC=infinity
	I0116 02:47:34.585507    6604 command_runner.go:130] > LimitCORE=infinity
	I0116 02:47:34.585507    6604 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0116 02:47:34.585507    6604 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0116 02:47:34.585507    6604 command_runner.go:130] > TasksMax=infinity
	I0116 02:47:34.585507    6604 command_runner.go:130] > TimeoutStartSec=0
	I0116 02:47:34.585507    6604 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0116 02:47:34.586051    6604 command_runner.go:130] > Delegate=yes
	I0116 02:47:34.586051    6604 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0116 02:47:34.586051    6604 command_runner.go:130] > KillMode=process
	I0116 02:47:34.586051    6604 command_runner.go:130] > [Install]
	I0116 02:47:34.586051    6604 command_runner.go:130] > WantedBy=multi-user.target
	I0116 02:47:34.599001    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:47:34.629584    6604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:47:34.662745    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:47:34.692289    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 02:47:34.723565    6604 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0116 02:47:34.772928    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 02:47:34.792101    6604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:47:34.820709    6604 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0116 02:47:34.835291    6604 ssh_runner.go:195] Run: which cri-dockerd
	I0116 02:47:34.840994    6604 command_runner.go:130] > /usr/bin/cri-dockerd
	I0116 02:47:34.853944    6604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0116 02:47:34.868876    6604 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0116 02:47:34.908062    6604 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0116 02:47:35.092750    6604 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0116 02:47:35.234815    6604 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0116 02:47:35.235151    6604 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0116 02:47:35.276073    6604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:47:35.440910    6604 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0116 02:47:36.939634    6604 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4987135s)
	I0116 02:47:36.954645    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0116 02:47:36.986523    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0116 02:47:37.020433    6604 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0116 02:47:37.188826    6604 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0116 02:47:37.356090    6604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:47:37.519396    6604 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0116 02:47:37.557471    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0116 02:47:37.590440    6604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:47:37.764310    6604 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0116 02:47:37.871347    6604 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0116 02:47:37.886348    6604 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0116 02:47:37.896648    6604 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0116 02:47:37.896648    6604 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 02:47:37.896648    6604 command_runner.go:130] > Device: 16h/22d	Inode: 872         Links: 1
	I0116 02:47:37.896648    6604 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0116 02:47:37.896648    6604 command_runner.go:130] > Access: 2024-01-16 02:47:37.787626033 +0000
	I0116 02:47:37.896648    6604 command_runner.go:130] > Modify: 2024-01-16 02:47:37.787626033 +0000
	I0116 02:47:37.896648    6604 command_runner.go:130] > Change: 2024-01-16 02:47:37.791626033 +0000
	I0116 02:47:37.896648    6604 command_runner.go:130] >  Birth: -
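The "Will wait 60s for socket path" step above amounts to polling `stat` on `/var/run/cri-dockerd.sock` until it appears or the deadline passes. A sketch of that poll loop with a throwaway path and a short 10-second deadline (both illustrative):

```shell
# Sketch of the "wait for socket path" poll seen above, against a
# throwaway path and a 10-second deadline (both assumptions).
sock=/tmp/demo.sock.$$
rm -f "$sock"
( sleep 1; touch "$sock" ) &     # stand-in for cri-dockerd creating the socket
deadline=$(( $(date +%s) + 10 ))
until stat "$sock" >/dev/null 2>&1; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "timed out waiting for $sock"
        break
    fi
    sleep 0.2
done
wait
```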
	I0116 02:47:37.896802    6604 start.go:543] Will wait 60s for crictl version
	I0116 02:47:37.911205    6604 ssh_runner.go:195] Run: which crictl
	I0116 02:47:37.917443    6604 command_runner.go:130] > /usr/bin/crictl
	I0116 02:47:37.931377    6604 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:47:38.014990    6604 command_runner.go:130] > Version:  0.1.0
	I0116 02:47:38.014990    6604 command_runner.go:130] > RuntimeName:  docker
	I0116 02:47:38.014990    6604 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0116 02:47:38.014990    6604 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 02:47:38.017426    6604 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0116 02:47:38.028880    6604 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0116 02:47:38.065116    6604 command_runner.go:130] > 24.0.7
	I0116 02:47:38.077119    6604 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0116 02:47:38.111251    6604 command_runner.go:130] > 24.0.7
	I0116 02:47:38.112499    6604 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0116 02:47:38.112582    6604 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0116 02:47:38.118723    6604 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0116 02:47:38.118723    6604 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0116 02:47:38.118723    6604 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0116 02:47:38.118723    6604 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:4e:7e Flags:up|broadcast|multicast|running}
	I0116 02:47:38.121707    6604 ip.go:210] interface addr: fe80::d699:fcba:3e2b:1549/64
	I0116 02:47:38.121707    6604 ip.go:210] interface addr: 172.27.112.1/20
	I0116 02:47:38.135107    6604 ssh_runner.go:195] Run: grep 172.27.112.1	host.minikube.internal$ /etc/hosts
	I0116 02:47:38.141074    6604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
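The `/etc/hosts` update above uses a grep-and-append idiom: strip any stale `host.minikube.internal` line, re-add the current one, and copy the result back via a temp file. The same idiom, run against a scratch file instead of the real `/etc/hosts`:

```shell
# The /etc/hosts update idiom from the log, run against a scratch file.
# Addresses are copied from the log above; the file path is throwaway.
hosts=/tmp/demo_hosts.$$
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n10.0.0.1\thost.minikube.internal\n' > "$hosts"
# Drop any stale host.minikube.internal entry, then append the fresh one;
# writing to a temp file and moving it back mirrors the /tmp/h.$$ trick.
{ grep -v "${tab}host.minikube.internal\$" "$hosts"
  printf '172.27.112.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
```

The real command uses `sudo cp` rather than `mv` because `/etc/hosts` itself must keep its ownership and the shell running the redirect is unprivileged.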
	I0116 02:47:38.159938    6604 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0116 02:47:38.170410    6604 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0116 02:47:38.194819    6604 docker.go:685] Got preloaded images: 
	I0116 02:47:38.194819    6604 docker.go:691] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I0116 02:47:38.212181    6604 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0116 02:47:38.228712    6604 command_runner.go:139] > {"Repositories":{}}
	I0116 02:47:38.245433    6604 ssh_runner.go:195] Run: which lz4
	I0116 02:47:38.250763    6604 command_runner.go:130] > /usr/bin/lz4
	I0116 02:47:38.251052    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0116 02:47:38.265518    6604 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0116 02:47:38.272099    6604 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 02:47:38.273182    6604 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0116 02:47:38.273429    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (423165415 bytes)
	I0116 02:47:41.579906    6604 docker.go:649] Took 3.327790 seconds to copy over tarball
	I0116 02:47:41.593424    6604 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0116 02:47:50.281649    6604 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6881695s)
	I0116 02:47:50.281843    6604 ssh_runner.go:146] rm: /preloaded.tar.lz4
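The preload sequence above copies the cached `.tar.lz4` image bundle to the guest, extracts it under `/var`, and removes the tarball. A sketch of that round trip using throwaway paths and gzip in place of lz4 (so the sketch runs without the `lz4` tool; the log's real command is `tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`):

```shell
# Round trip of the preload tarball steps above with throwaway paths,
# using gzip instead of lz4 so the sketch runs without the lz4 tool.
src=/tmp/preload_src.$$
dst=/tmp/preload_dst.$$
tarball=/tmp/preloaded.$$.tar.gz
mkdir -p "$src/lib/docker" "$dst"
echo layerdata > "$src/lib/docker/image.layer"
tar -C "$src" -czf "$tarball" .    # pack (the log uses -I lz4 here)
tar -C "$dst" -xzf "$tarball"      # extract under the target root
rm "$tarball"                      # matches the rm of /preloaded.tar.lz4
```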
	I0116 02:47:50.352981    6604 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0116 02:47:50.370279    6604 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.9-0":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3":"sha256:73deb9a3f702532592a4167455f8
bf2e5f5d900bcc959ba2fd2d35c321de1af9"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.28.4":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.28.4":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.28.4":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021
a3a2899304398e"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.28.4":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0116 02:47:50.370887    6604 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0116 02:47:50.411817    6604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:47:50.579268    6604 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0116 02:47:53.348754    6604 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7685689s)
	I0116 02:47:53.361256    6604 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0116 02:47:53.387272    6604 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0116 02:47:53.387272    6604 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0116 02:47:53.387272    6604 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0116 02:47:53.387272    6604 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0116 02:47:53.387272    6604 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0116 02:47:53.387272    6604 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0116 02:47:53.387272    6604 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0116 02:47:53.387272    6604 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:47:53.388506    6604 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0116 02:47:53.388576    6604 cache_images.go:84] Images are preloaded, skipping loading
	I0116 02:47:53.399783    6604 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0116 02:47:53.436771    6604 command_runner.go:130] > cgroupfs
	I0116 02:47:53.438228    6604 cni.go:84] Creating CNI manager for ""
	I0116 02:47:53.438398    6604 cni.go:136] 1 nodes found, recommending kindnet
	I0116 02:47:53.438398    6604 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:47:53.438398    6604 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.112.69 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-853900 NodeName:multinode-853900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.112.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.112.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 02:47:53.438398    6604 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.112.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-853900"
	  kubeletExtraArgs:
	    node-ip: 172.27.112.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.112.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 02:47:53.438398    6604 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-853900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.112.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-853900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 02:47:53.453226    6604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 02:47:53.468745    6604 command_runner.go:130] > kubeadm
	I0116 02:47:53.468745    6604 command_runner.go:130] > kubectl
	I0116 02:47:53.468745    6604 command_runner.go:130] > kubelet
	I0116 02:47:53.468818    6604 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 02:47:53.484774    6604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 02:47:53.498806    6604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0116 02:47:53.526151    6604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 02:47:53.552811    6604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0116 02:47:53.595428    6604 ssh_runner.go:195] Run: grep 172.27.112.69	control-plane.minikube.internal$ /etc/hosts
	I0116 02:47:53.600980    6604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.112.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
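The `/etc/hosts` rewrite above uses a common shell idiom: filter out any stale `control-plane.minikube.internal` entry, append the current one, stage the result in a temp file, then copy it back so the target is never read and truncated at the same time. A minimal sketch of the same idiom against a scratch file (the hostname stays as in the log; the IPs here are made-up examples):

```shell
# Stand-in for /etc/hosts so the sketch is safe to run anywhere.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n10.0.0.5\tcontrol-plane.minikube.internal\n' > "$hosts"

# Drop the stale tab-separated entry (ANSI-C quoting $'\t' matches a real tab),
# append the new IP, and write to a staging file before copying back in place.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '10.0.0.9\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"

grep 'control-plane.minikube.internal' "$hosts"
```

Writing to `"$hosts.new"` first matters because redirecting the group straight onto `"$hosts"` would truncate the file before `grep` reads it.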
	I0116 02:47:53.619110    6604 certs.go:56] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900 for IP: 172.27.112.69
	I0116 02:47:53.619208    6604 certs.go:190] acquiring lock for shared ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:53.620115    6604 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0116 02:47:53.620594    6604 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0116 02:47:53.621684    6604 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\client.key
	I0116 02:47:53.621818    6604 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\client.crt with IP's: []
	I0116 02:47:53.786296    6604 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\client.crt ...
	I0116 02:47:53.787472    6604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\client.crt: {Name:mkf2d1472ed24ac981359fefdb9b57c9a0abafa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:53.788818    6604 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\client.key ...
	I0116 02:47:53.788818    6604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\client.key: {Name:mk81165ca3ac98c860079d2f93c182652fa323aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:53.789794    6604 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.key.ccc58862
	I0116 02:47:53.789794    6604 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.crt.ccc58862 with IP's: [172.27.112.69 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 02:47:53.922236    6604 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.crt.ccc58862 ...
	I0116 02:47:53.922236    6604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.crt.ccc58862: {Name:mkef9b535bfaeaf0b590b7e0e4e37b2cbf5cc488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:53.924497    6604 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.key.ccc58862 ...
	I0116 02:47:53.924497    6604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.key.ccc58862: {Name:mk96e3b4d6e17c402425f760c5335c418846f490 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:53.924824    6604 certs.go:337] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.crt.ccc58862 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.crt
	I0116 02:47:53.936639    6604 certs.go:341] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.key.ccc58862 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.key
	I0116 02:47:53.937650    6604 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\proxy-client.key
	I0116 02:47:53.937650    6604 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\proxy-client.crt with IP's: []
	I0116 02:47:54.190936    6604 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\proxy-client.crt ...
	I0116 02:47:54.190936    6604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\proxy-client.crt: {Name:mk52c6e34c87fd9ec59e148744cba992f9270e41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:54.192744    6604 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\proxy-client.key ...
	I0116 02:47:54.192744    6604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\proxy-client.key: {Name:mkf6c89359b0e6ff6e5059f67067cb242d281bf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:47:54.193205    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0116 02:47:54.194380    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0116 02:47:54.194560    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0116 02:47:54.203828    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0116 02:47:54.204897    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 02:47:54.205047    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0116 02:47:54.205233    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 02:47:54.205410    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 02:47:54.205562    6604 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508.pem (1338 bytes)
	W0116 02:47:54.206182    6604 certs.go:433] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508_empty.pem, impossibly tiny 0 bytes
	I0116 02:47:54.206182    6604 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0116 02:47:54.206464    6604 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0116 02:47:54.206788    6604 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0116 02:47:54.207056    6604 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0116 02:47:54.207404    6604 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem (1708 bytes)
	I0116 02:47:54.207404    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> /usr/share/ca-certificates/135082.pem
	I0116 02:47:54.208015    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:47:54.208250    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508.pem -> /usr/share/ca-certificates/13508.pem
	I0116 02:47:54.209482    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 02:47:54.254720    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0116 02:47:54.297044    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 02:47:54.335354    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 02:47:54.381147    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:47:54.422037    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 02:47:54.461624    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:47:54.502978    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 02:47:54.543722    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem --> /usr/share/ca-certificates/135082.pem (1708 bytes)
	I0116 02:47:54.583879    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:47:54.624901    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508.pem --> /usr/share/ca-certificates/13508.pem (1338 bytes)
	I0116 02:47:54.664289    6604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 02:47:54.708801    6604 ssh_runner.go:195] Run: openssl version
	I0116 02:47:54.716115    6604 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 02:47:54.730910    6604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135082.pem && ln -fs /usr/share/ca-certificates/135082.pem /etc/ssl/certs/135082.pem"
	I0116 02:47:54.761216    6604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135082.pem
	I0116 02:47:54.768581    6604 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 01:53 /usr/share/ca-certificates/135082.pem
	I0116 02:47:54.768777    6604 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 01:53 /usr/share/ca-certificates/135082.pem
	I0116 02:47:54.781764    6604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135082.pem
	I0116 02:47:54.789748    6604 command_runner.go:130] > 3ec20f2e
	I0116 02:47:54.804134    6604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/135082.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 02:47:54.835291    6604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:47:54.864606    6604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:47:54.870609    6604 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 01:40 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:47:54.870609    6604 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 01:40 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:47:54.886199    6604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:47:54.893122    6604 command_runner.go:130] > b5213941
	I0116 02:47:54.907602    6604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 02:47:54.936385    6604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13508.pem && ln -fs /usr/share/ca-certificates/13508.pem /etc/ssl/certs/13508.pem"
	I0116 02:47:54.966526    6604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13508.pem
	I0116 02:47:54.972915    6604 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 01:53 /usr/share/ca-certificates/13508.pem
	I0116 02:47:54.972915    6604 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 01:53 /usr/share/ca-certificates/13508.pem
	I0116 02:47:54.985945    6604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13508.pem
	I0116 02:47:54.994015    6604 command_runner.go:130] > 51391683
	I0116 02:47:55.008920    6604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13508.pem /etc/ssl/certs/51391683.0"
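The three `openssl x509 -hash` / `ln -fs` pairs above follow OpenSSL's CA lookup convention: tools resolve trust anchors in `/etc/ssl/certs` by the subject-name hash, so each PEM needs a `<hash>.0` symlink pointing at it. A self-contained sketch of that convention using a throwaway self-signed CA in a temp directory (the `exampleCA` subject is illustrative only):

```shell
# Generate a short-lived self-signed cert purely for demonstration.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=exampleCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null

# Compute the subject-name hash and create the <hash>.0 symlink OpenSSL
# expects when scanning a certificate directory.
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
ls -l "$dir/$hash.0"
```

The `.0` suffix disambiguates distinct certificates whose subjects happen to hash to the same value (`.1`, `.2`, and so on would follow).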
	I0116 02:47:55.039363    6604 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:47:55.039875    6604 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:47:55.039875    6604 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:47:55.039875    6604 kubeadm.go:404] StartCluster: {Name:multinode-853900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-853900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.112.69 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:47:55.053992    6604 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0116 02:47:55.097604    6604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 02:47:55.113144    6604 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0116 02:47:55.113230    6604 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0116 02:47:55.113230    6604 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0116 02:47:55.129195    6604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 02:47:55.159505    6604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 02:47:55.175982    6604 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0116 02:47:55.176985    6604 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0116 02:47:55.177015    6604 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0116 02:47:55.177015    6604 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:47:55.177015    6604 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 02:47:55.177015    6604 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0116 02:47:55.956958    6604 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:47:55.956958    6604 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:48:09.099820    6604 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0116 02:48:09.099914    6604 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I0116 02:48:09.099973    6604 kubeadm.go:322] [preflight] Running pre-flight checks
	I0116 02:48:09.100050    6604 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 02:48:09.100289    6604 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 02:48:09.100353    6604 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0116 02:48:09.100632    6604 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 02:48:09.100632    6604 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0116 02:48:09.100885    6604 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 02:48:09.100950    6604 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0116 02:48:09.101153    6604 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:48:09.101153    6604 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 02:48:09.102005    6604 out.go:204]   - Generating certificates and keys ...
	I0116 02:48:09.102203    6604 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0116 02:48:09.102266    6604 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0116 02:48:09.102400    6604 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0116 02:48:09.102400    6604 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0116 02:48:09.102668    6604 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 02:48:09.102732    6604 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0116 02:48:09.102858    6604 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0116 02:48:09.102916    6604 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0116 02:48:09.103063    6604 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0116 02:48:09.103127    6604 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0116 02:48:09.103254    6604 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0116 02:48:09.103421    6604 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0116 02:48:09.103564    6604 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0116 02:48:09.103564    6604 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0116 02:48:09.103949    6604 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-853900] and IPs [172.27.112.69 127.0.0.1 ::1]
	I0116 02:48:09.103949    6604 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-853900] and IPs [172.27.112.69 127.0.0.1 ::1]
	I0116 02:48:09.104081    6604 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0116 02:48:09.104161    6604 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0116 02:48:09.104560    6604 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-853900] and IPs [172.27.112.69 127.0.0.1 ::1]
	I0116 02:48:09.104560    6604 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-853900] and IPs [172.27.112.69 127.0.0.1 ::1]
	I0116 02:48:09.104560    6604 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 02:48:09.104560    6604 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0116 02:48:09.104560    6604 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 02:48:09.104560    6604 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0116 02:48:09.104560    6604 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0116 02:48:09.104560    6604 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0116 02:48:09.105110    6604 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:48:09.105110    6604 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 02:48:09.105176    6604 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:48:09.105265    6604 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 02:48:09.105488    6604 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:48:09.105488    6604 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 02:48:09.105678    6604 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:48:09.105678    6604 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 02:48:09.105807    6604 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:48:09.105807    6604 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 02:48:09.105984    6604 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 02:48:09.106055    6604 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0116 02:48:09.106213    6604 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:48:09.106272    6604 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 02:48:09.106947    6604 out.go:204]   - Booting up control plane ...
	I0116 02:48:09.107236    6604 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:48:09.107236    6604 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 02:48:09.107354    6604 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:48:09.107354    6604 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 02:48:09.107354    6604 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:48:09.107354    6604 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 02:48:09.108046    6604 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:48:09.108046    6604 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:48:09.108207    6604 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:48:09.108207    6604 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:48:09.108207    6604 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0116 02:48:09.108207    6604 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 02:48:09.108826    6604 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 02:48:09.108826    6604 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0116 02:48:09.108826    6604 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.006180 seconds
	I0116 02:48:09.108826    6604 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.006180 seconds
	I0116 02:48:09.108826    6604 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 02:48:09.108826    6604 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0116 02:48:09.108826    6604 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 02:48:09.108826    6604 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0116 02:48:09.108826    6604 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0116 02:48:09.108826    6604 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0116 02:48:09.109985    6604 command_runner.go:130] > [mark-control-plane] Marking the node multinode-853900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 02:48:09.109985    6604 kubeadm.go:322] [mark-control-plane] Marking the node multinode-853900 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0116 02:48:09.109985    6604 kubeadm.go:322] [bootstrap-token] Using token: p6ddki.6fom2nvvqfx3dl0s
	I0116 02:48:09.109985    6604 command_runner.go:130] > [bootstrap-token] Using token: p6ddki.6fom2nvvqfx3dl0s
	I0116 02:48:09.109985    6604 out.go:204]   - Configuring RBAC rules ...
	I0116 02:48:09.110915    6604 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 02:48:09.110915    6604 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0116 02:48:09.110915    6604 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 02:48:09.110915    6604 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0116 02:48:09.110915    6604 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 02:48:09.110915    6604 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0116 02:48:09.111967    6604 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 02:48:09.111967    6604 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0116 02:48:09.111967    6604 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 02:48:09.111967    6604 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0116 02:48:09.112530    6604 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 02:48:09.112583    6604 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0116 02:48:09.112583    6604 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 02:48:09.112583    6604 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0116 02:48:09.112583    6604 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0116 02:48:09.112583    6604 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0116 02:48:09.112583    6604 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0116 02:48:09.112583    6604 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0116 02:48:09.112583    6604 kubeadm.go:322] 
	I0116 02:48:09.112583    6604 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0116 02:48:09.112583    6604 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0116 02:48:09.112583    6604 kubeadm.go:322] 
	I0116 02:48:09.112583    6604 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0116 02:48:09.112583    6604 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0116 02:48:09.112583    6604 kubeadm.go:322] 
	I0116 02:48:09.112583    6604 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0116 02:48:09.112583    6604 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0116 02:48:09.113585    6604 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 02:48:09.113585    6604 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0116 02:48:09.113585    6604 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 02:48:09.113585    6604 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0116 02:48:09.113585    6604 kubeadm.go:322] 
	I0116 02:48:09.113585    6604 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0116 02:48:09.113585    6604 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0116 02:48:09.113585    6604 kubeadm.go:322] 
	I0116 02:48:09.113585    6604 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 02:48:09.113585    6604 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0116 02:48:09.113585    6604 kubeadm.go:322] 
	I0116 02:48:09.113585    6604 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0116 02:48:09.113585    6604 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0116 02:48:09.113585    6604 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 02:48:09.113585    6604 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0116 02:48:09.114590    6604 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 02:48:09.114590    6604 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0116 02:48:09.114590    6604 kubeadm.go:322] 
	I0116 02:48:09.114590    6604 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0116 02:48:09.114590    6604 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0116 02:48:09.114590    6604 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0116 02:48:09.114590    6604 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0116 02:48:09.114590    6604 kubeadm.go:322] 
	I0116 02:48:09.114590    6604 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token p6ddki.6fom2nvvqfx3dl0s \
	I0116 02:48:09.114590    6604 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token p6ddki.6fom2nvvqfx3dl0s \
	I0116 02:48:09.114590    6604 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:66ef9a38e06c175fa30850fd5c63399966a4115300a5c161cb370d2d951391b9 \
	I0116 02:48:09.114590    6604 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:66ef9a38e06c175fa30850fd5c63399966a4115300a5c161cb370d2d951391b9 \
	I0116 02:48:09.115582    6604 command_runner.go:130] > 	--control-plane 
	I0116 02:48:09.115582    6604 kubeadm.go:322] 	--control-plane 
	I0116 02:48:09.115582    6604 kubeadm.go:322] 
	I0116 02:48:09.115582    6604 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0116 02:48:09.115582    6604 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0116 02:48:09.115582    6604 kubeadm.go:322] 
	I0116 02:48:09.115582    6604 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token p6ddki.6fom2nvvqfx3dl0s \
	I0116 02:48:09.115582    6604 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token p6ddki.6fom2nvvqfx3dl0s \
	I0116 02:48:09.115582    6604 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:66ef9a38e06c175fa30850fd5c63399966a4115300a5c161cb370d2d951391b9 
	I0116 02:48:09.115582    6604 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:66ef9a38e06c175fa30850fd5c63399966a4115300a5c161cb370d2d951391b9 
	I0116 02:48:09.115582    6604 cni.go:84] Creating CNI manager for ""
	I0116 02:48:09.115582    6604 cni.go:136] 1 nodes found, recommending kindnet
	I0116 02:48:09.116594    6604 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 02:48:09.129626    6604 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 02:48:09.139441    6604 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 02:48:09.139566    6604 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 02:48:09.139566    6604 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 02:48:09.139566    6604 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:48:09.139566    6604 command_runner.go:130] > Access: 2024-01-16 02:46:20.640697600 +0000
	I0116 02:48:09.139566    6604 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 02:48:09.139566    6604 command_runner.go:130] > Change: 2024-01-16 02:46:11.185000000 +0000
	I0116 02:48:09.139670    6604 command_runner.go:130] >  Birth: -
	I0116 02:48:09.140100    6604 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 02:48:09.140191    6604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 02:48:09.189357    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 02:48:10.743645    6604 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0116 02:48:10.743715    6604 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0116 02:48:10.743766    6604 command_runner.go:130] > serviceaccount/kindnet created
	I0116 02:48:10.743766    6604 command_runner.go:130] > daemonset.apps/kindnet created
	I0116 02:48:10.743766    6604 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5543988s)
	I0116 02:48:10.743828    6604 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 02:48:10.760700    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:10.760700    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=multinode-853900 minikube.k8s.io/updated_at=2024_01_16T02_48_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:10.775265    6604 command_runner.go:130] > -16
	I0116 02:48:10.775265    6604 ops.go:34] apiserver oom_adj: -16
	I0116 02:48:10.913228    6604 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0116 02:48:10.931577    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:10.958050    6604 command_runner.go:130] > node/multinode-853900 labeled
	I0116 02:48:11.070178    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:11.446745    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:11.578334    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:11.938719    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:12.052619    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:12.436848    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:12.566079    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:12.944262    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:13.073489    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:13.441433    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:13.578842    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:13.946838    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:14.072150    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:14.446637    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:14.606503    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:14.933381    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:15.047673    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:15.439614    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:15.571638    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:15.943604    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:16.080016    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:16.432903    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:16.565835    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:16.937701    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:17.048316    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:17.438938    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:17.564265    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:17.945287    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:18.061950    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:18.444584    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:18.561257    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:18.947902    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:19.109590    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:19.435011    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:19.562023    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:19.940995    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:20.080717    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:20.445244    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:20.623648    6604 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0116 02:48:20.938960    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:48:21.111721    6604 command_runner.go:130] > NAME      SECRETS   AGE
	I0116 02:48:21.111721    6604 command_runner.go:130] > default   0         0s
	I0116 02:48:21.111721    6604 kubeadm.go:1088] duration metric: took 10.3677581s to wait for elevateKubeSystemPrivileges.
	I0116 02:48:21.111721    6604 kubeadm.go:406] StartCluster complete in 26.0716773s
	I0116 02:48:21.111721    6604 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:48:21.112266    6604 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 02:48:21.113867    6604 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:48:21.115252    6604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 02:48:21.115778    6604 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 02:48:21.115865    6604 addons.go:69] Setting storage-provisioner=true in profile "multinode-853900"
	I0116 02:48:21.115865    6604 addons.go:234] Setting addon storage-provisioner=true in "multinode-853900"
	I0116 02:48:21.115865    6604 host.go:66] Checking if "multinode-853900" exists ...
	I0116 02:48:21.115865    6604 addons.go:69] Setting default-storageclass=true in profile "multinode-853900"
	I0116 02:48:21.115865    6604 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-853900"
	I0116 02:48:21.115865    6604 config.go:182] Loaded profile config "multinode-853900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 02:48:21.116555    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:48:21.117246    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:48:21.133065    6604 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 02:48:21.133789    6604 kapi.go:59] client config for multinode-853900: &rest.Config{Host:"https://172.27.112.69:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x270c520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:48:21.136088    6604 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 02:48:21.136628    6604 round_trippers.go:463] GET https://172.27.112.69:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:48:21.136628    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:21.136772    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:21.136772    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:21.162807    6604 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0116 02:48:21.162807    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:21.162807    6604 round_trippers.go:580]     Audit-Id: 84ac214c-58c8-4c2f-8658-6cf4bd179e7b
	I0116 02:48:21.162807    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:21.162807    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:21.162807    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:21.163146    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:21.163146    6604 round_trippers.go:580]     Content-Length: 291
	I0116 02:48:21.163218    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:21 GMT
	I0116 02:48:21.163218    6604 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bb9e4be9-a821-417a-b943-b930d6cec07c","resourceVersion":"315","creationTimestamp":"2024-01-16T02:48:09Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 02:48:21.164202    6604 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bb9e4be9-a821-417a-b943-b930d6cec07c","resourceVersion":"315","creationTimestamp":"2024-01-16T02:48:09Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 02:48:21.164343    6604 round_trippers.go:463] PUT https://172.27.112.69:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:48:21.164393    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:21.164393    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:21.164425    6604 round_trippers.go:473]     Content-Type: application/json
	I0116 02:48:21.164425    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:21.180141    6604 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0116 02:48:21.180141    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:21.180141    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:21.180141    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:21.180141    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:21.180141    6604 round_trippers.go:580]     Content-Length: 291
	I0116 02:48:21.180141    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:21 GMT
	I0116 02:48:21.180141    6604 round_trippers.go:580]     Audit-Id: bbb8a793-7c3c-456a-80a4-a035c899b238
	I0116 02:48:21.180141    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:21.180141    6604 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bb9e4be9-a821-417a-b943-b930d6cec07c","resourceVersion":"319","creationTimestamp":"2024-01-16T02:48:09Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 02:48:21.433549    6604 command_runner.go:130] > apiVersion: v1
	I0116 02:48:21.433674    6604 command_runner.go:130] > data:
	I0116 02:48:21.433674    6604 command_runner.go:130] >   Corefile: |
	I0116 02:48:21.433674    6604 command_runner.go:130] >     .:53 {
	I0116 02:48:21.433674    6604 command_runner.go:130] >         errors
	I0116 02:48:21.433674    6604 command_runner.go:130] >         health {
	I0116 02:48:21.433674    6604 command_runner.go:130] >            lameduck 5s
	I0116 02:48:21.433674    6604 command_runner.go:130] >         }
	I0116 02:48:21.433674    6604 command_runner.go:130] >         ready
	I0116 02:48:21.433760    6604 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0116 02:48:21.433793    6604 command_runner.go:130] >            pods insecure
	I0116 02:48:21.433793    6604 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0116 02:48:21.433815    6604 command_runner.go:130] >            ttl 30
	I0116 02:48:21.433815    6604 command_runner.go:130] >         }
	I0116 02:48:21.433815    6604 command_runner.go:130] >         prometheus :9153
	I0116 02:48:21.433815    6604 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0116 02:48:21.433815    6604 command_runner.go:130] >            max_concurrent 1000
	I0116 02:48:21.433815    6604 command_runner.go:130] >         }
	I0116 02:48:21.433815    6604 command_runner.go:130] >         cache 30
	I0116 02:48:21.433890    6604 command_runner.go:130] >         loop
	I0116 02:48:21.433890    6604 command_runner.go:130] >         reload
	I0116 02:48:21.433890    6604 command_runner.go:130] >         loadbalance
	I0116 02:48:21.433890    6604 command_runner.go:130] >     }
	I0116 02:48:21.433890    6604 command_runner.go:130] > kind: ConfigMap
	I0116 02:48:21.433890    6604 command_runner.go:130] > metadata:
	I0116 02:48:21.433965    6604 command_runner.go:130] >   creationTimestamp: "2024-01-16T02:48:09Z"
	I0116 02:48:21.433965    6604 command_runner.go:130] >   name: coredns
	I0116 02:48:21.433965    6604 command_runner.go:130] >   namespace: kube-system
	I0116 02:48:21.433965    6604 command_runner.go:130] >   resourceVersion: "233"
	I0116 02:48:21.433965    6604 command_runner.go:130] >   uid: fe1f65b2-4581-48a1-8dac-27ca5a22cf1f
	I0116 02:48:21.434263    6604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.112.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0116 02:48:21.651107    6604 round_trippers.go:463] GET https://172.27.112.69:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:48:21.651187    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:21.651187    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:21.651187    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:21.683694    6604 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0116 02:48:21.684344    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:21.684344    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:21.684344    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:21.684344    6604 round_trippers.go:580]     Content-Length: 291
	I0116 02:48:21.684344    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:21 GMT
	I0116 02:48:21.684506    6604 round_trippers.go:580]     Audit-Id: 5defad3d-bea0-408e-93ca-4a98f79c2841
	I0116 02:48:21.684506    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:21.684506    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:21.687661    6604 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bb9e4be9-a821-417a-b943-b930d6cec07c","resourceVersion":"325","creationTimestamp":"2024-01-16T02:48:09Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0116 02:48:21.687661    6604 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-853900" context rescaled to 1 replicas
	I0116 02:48:21.687661    6604 start.go:223] Will wait 6m0s for node &{Name: IP:172.27.112.69 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0116 02:48:21.688654    6604 out.go:177] * Verifying Kubernetes components...
	I0116 02:48:21.716460    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:48:22.428425    6604 command_runner.go:130] > configmap/coredns replaced
	I0116 02:48:22.428547    6604 start.go:929] {"host.minikube.internal": 172.27.112.1} host record injected into CoreDNS's ConfigMap
	I0116 02:48:22.429430    6604 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 02:48:22.430255    6604 kapi.go:59] client config for multinode-853900: &rest.Config{Host:"https://172.27.112.69:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x270c520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:48:22.431194    6604 node_ready.go:35] waiting up to 6m0s for node "multinode-853900" to be "Ready" ...
	I0116 02:48:22.431326    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:22.431420    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:22.431420    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:22.431420    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:22.436477    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:48:22.436477    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:22.436579    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:22.436579    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:22 GMT
	I0116 02:48:22.436579    6604 round_trippers.go:580]     Audit-Id: bcee6934-b803-4465-be2d-578c870b6b4f
	I0116 02:48:22.436668    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:22.436706    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:22.436740    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:22.437088    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:22.936145    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:22.936200    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:22.936258    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:22.936258    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:22.939710    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:22.939710    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:22.939710    6604 round_trippers.go:580]     Audit-Id: 53a244ea-5f49-42e7-9493-2b3afa9f0d29
	I0116 02:48:22.939710    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:22.939710    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:22.939710    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:22.939710    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:22.940170    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:22 GMT
	I0116 02:48:22.940591    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:23.413699    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:48:23.413699    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:48:23.414524    6604 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 02:48:23.415279    6604 kapi.go:59] client config for multinode-853900: &rest.Config{Host:"https://172.27.112.69:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x270c520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:48:23.415941    6604 addons.go:234] Setting addon default-storageclass=true in "multinode-853900"
	I0116 02:48:23.416103    6604 host.go:66] Checking if "multinode-853900" exists ...
	I0116 02:48:23.417023    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:48:23.420092    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:48:23.420629    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:48:23.421477    6604 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 02:48:23.422211    6604 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:48:23.422211    6604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0116 02:48:23.422352    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:48:23.439862    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:23.439862    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:23.439862    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:23.439862    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:23.444866    6604 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 02:48:23.445025    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:23.445025    6604 round_trippers.go:580]     Audit-Id: c9a1113f-4d42-485d-ae1a-1b3be6343023
	I0116 02:48:23.445025    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:23.445025    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:23.445025    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:23.445139    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:23.445139    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:23 GMT
	I0116 02:48:23.445430    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:23.935559    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:23.935631    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:23.935631    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:23.935631    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:23.939249    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:23.940205    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:23.940261    6604 round_trippers.go:580]     Audit-Id: c18de7ab-1fe0-482a-a3a9-d6cabaf98597
	I0116 02:48:23.940261    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:23.940261    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:23.940261    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:23.940261    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:23.940350    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:23 GMT
	I0116 02:48:23.940616    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:24.444606    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:24.444606    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:24.444682    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:24.444682    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:24.449939    6604 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 02:48:24.450925    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:24.450956    6604 round_trippers.go:580]     Audit-Id: e3178602-dd69-47cc-bb35-5bee3843e144
	I0116 02:48:24.450956    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:24.450956    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:24.450956    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:24.450956    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:24.450956    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:24 GMT
	I0116 02:48:24.451300    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:24.451838    6604 node_ready.go:58] node "multinode-853900" has status "Ready":"False"
	I0116 02:48:24.936597    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:24.936597    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:24.936697    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:24.936697    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:24.940728    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:48:24.940932    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:24.940932    6604 round_trippers.go:580]     Audit-Id: 45521b15-1ce0-4509-905c-24b7b5198805
	I0116 02:48:24.940932    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:24.940932    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:24.941037    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:24.941062    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:24.941098    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:24 GMT
	I0116 02:48:24.941383    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:25.446079    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:25.446165    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:25.446165    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:25.446165    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:25.449480    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:25.450006    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:25.450006    6604 round_trippers.go:580]     Audit-Id: 6ba3985a-f30f-44a9-afa7-4c4bd5267e7b
	I0116 02:48:25.450006    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:25.450096    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:25.450129    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:25.450129    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:25.450129    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:25 GMT
	I0116 02:48:25.450567    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:25.631643    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:48:25.631643    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:48:25.631643    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:48:25.650577    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:48:25.650577    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:48:25.651595    6604 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0116 02:48:25.651595    6604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0116 02:48:25.651595    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:48:25.937590    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:25.937651    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:25.937651    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:25.937651    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:25.941011    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:25.942017    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:25.942017    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:25.942017    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:25 GMT
	I0116 02:48:25.942017    6604 round_trippers.go:580]     Audit-Id: 608cc3a2-7e63-449f-8d9e-4083c0618e4d
	I0116 02:48:25.942017    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:25.942017    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:25.942017    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:25.942467    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:26.447137    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:26.447137    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:26.447137    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:26.447137    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:26.450374    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:26.450374    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:26.450374    6604 round_trippers.go:580]     Audit-Id: 7dc474c7-bb9a-4725-b423-de20840d24c2
	I0116 02:48:26.450374    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:26.450374    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:26.450374    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:26.450374    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:26.450374    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:26 GMT
	I0116 02:48:26.451378    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:26.941100    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:26.941100    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:26.941100    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:26.941100    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:26.944196    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:26.944196    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:26.944196    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:26.944196    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:26 GMT
	I0116 02:48:26.944196    6604 round_trippers.go:580]     Audit-Id: 6dc89044-91cd-424b-94d1-c55aca396ba3
	I0116 02:48:26.944196    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:26.944196    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:26.945241    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:26.945424    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:26.946235    6604 node_ready.go:58] node "multinode-853900" has status "Ready":"False"
	I0116 02:48:27.433693    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:27.433754    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:27.433788    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:27.433788    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:27.443109    6604 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0116 02:48:27.443513    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:27.443513    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:27 GMT
	I0116 02:48:27.443513    6604 round_trippers.go:580]     Audit-Id: 7a6f1502-1f83-45df-a929-309db6915592
	I0116 02:48:27.443513    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:27.443513    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:27.443513    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:27.443603    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:27.443810    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:27.893865    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:48:27.893865    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:48:27.893865    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:48:27.942079    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:27.942166    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:27.942166    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:27.942166    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:27.945855    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:27.946234    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:27.946234    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:27.946234    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:27.946234    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:27.946234    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:27 GMT
	I0116 02:48:27.946234    6604 round_trippers.go:580]     Audit-Id: 1f78679e-8cc2-43bf-a8c5-045c07800072
	I0116 02:48:27.946303    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:27.946797    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:28.370564    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:48:28.370564    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:48:28.370850    6604 sshutil.go:53] new ssh client: &{IP:172.27.112.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\id_rsa Username:docker}
	I0116 02:48:28.434179    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:28.434238    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:28.434238    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:28.434238    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:28.463594    6604 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0116 02:48:28.463594    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:28.463594    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:28.463594    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:28 GMT
	I0116 02:48:28.463594    6604 round_trippers.go:580]     Audit-Id: b1b23720-36b9-4258-b84a-c5ff637a2049
	I0116 02:48:28.463594    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:28.463594    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:28.463594    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:28.464377    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:28.601619    6604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0116 02:48:28.940163    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:28.940219    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:28.940219    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:28.940219    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:28.943433    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:28.943433    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:28.943876    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:28 GMT
	I0116 02:48:28.943876    6604 round_trippers.go:580]     Audit-Id: 5ae70e5f-3c7c-4904-b2d2-56e6175b622c
	I0116 02:48:28.943876    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:28.943876    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:28.943876    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:28.943876    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:28.944362    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:29.412916    6604 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0116 02:48:29.413014    6604 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0116 02:48:29.413014    6604 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0116 02:48:29.413084    6604 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0116 02:48:29.413084    6604 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0116 02:48:29.413144    6604 command_runner.go:130] > pod/storage-provisioner created
	I0116 02:48:29.445507    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:29.445507    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:29.445507    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:29.445507    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:29.450026    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:48:29.450161    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:29.450161    6604 round_trippers.go:580]     Audit-Id: 6c894af9-2fcd-46e1-860c-48ed68e212f5
	I0116 02:48:29.450161    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:29.450221    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:29.450221    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:29.450221    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:29.450221    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:29 GMT
	I0116 02:48:29.450678    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:29.451272    6604 node_ready.go:58] node "multinode-853900" has status "Ready":"False"
	I0116 02:48:29.940264    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:29.940264    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:29.940264    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:29.940391    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:29.943707    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:29.943707    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:29.944513    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:29.944513    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:29.944513    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:29.944513    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:29 GMT
	I0116 02:48:29.944584    6604 round_trippers.go:580]     Audit-Id: 20b5959c-9386-4ad8-b282-77aac424ddc8
	I0116 02:48:29.944584    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:29.944855    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:30.432551    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:30.432551    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:30.432675    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:30.432675    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:30.436470    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:30.436995    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:30.436995    6604 round_trippers.go:580]     Audit-Id: 8f62e6e3-66b9-466c-a1d5-39a0e2fa7c82
	I0116 02:48:30.436995    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:30.437090    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:30.437090    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:30.437132    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:30.437132    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:30 GMT
	I0116 02:48:30.437295    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:30.607491    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:48:30.607719    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:48:30.607893    6604 sshutil.go:53] new ssh client: &{IP:172.27.112.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\id_rsa Username:docker}
	I0116 02:48:30.746066    6604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0116 02:48:30.941395    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:30.941395    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:30.941395    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:30.941395    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:30.947863    6604 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 02:48:30.947863    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:30.947863    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:30.947863    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:30.947863    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:30.947863    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:30 GMT
	I0116 02:48:30.947863    6604 round_trippers.go:580]     Audit-Id: d5107fa3-7a60-4093-9985-97eee147ec5a
	I0116 02:48:30.947863    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:30.948689    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:31.298562    6604 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0116 02:48:31.299612    6604 round_trippers.go:463] GET https://172.27.112.69:8443/apis/storage.k8s.io/v1/storageclasses
	I0116 02:48:31.299660    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:31.299692    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:31.299692    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:31.304080    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:48:31.304080    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:31.304080    6604 round_trippers.go:580]     Content-Length: 1273
	I0116 02:48:31.304080    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:31 GMT
	I0116 02:48:31.304186    6604 round_trippers.go:580]     Audit-Id: 51eb9ebd-f64d-4024-a902-f76c6e145946
	I0116 02:48:31.304186    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:31.304186    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:31.304223    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:31.304223    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:31.304277    6604 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"383"},"items":[{"metadata":{"name":"standard","uid":"a1ca3150-343e-4843-a280-01e9e8e2658b","resourceVersion":"383","creationTimestamp":"2024-01-16T02:48:31Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T02:48:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0116 02:48:31.305115    6604 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a1ca3150-343e-4843-a280-01e9e8e2658b","resourceVersion":"383","creationTimestamp":"2024-01-16T02:48:31Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T02:48:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0116 02:48:31.305222    6604 round_trippers.go:463] PUT https://172.27.112.69:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0116 02:48:31.305222    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:31.305222    6604 round_trippers.go:473]     Content-Type: application/json
	I0116 02:48:31.305222    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:31.305222    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:31.308762    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:31.308762    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:31.308762    6604 round_trippers.go:580]     Audit-Id: 074382fa-361f-4845-a696-c05063059754
	I0116 02:48:31.308762    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:31.308762    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:31.308762    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:31.308762    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:31.308762    6604 round_trippers.go:580]     Content-Length: 1220
	I0116 02:48:31.308762    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:31 GMT
	I0116 02:48:31.308762    6604 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a1ca3150-343e-4843-a280-01e9e8e2658b","resourceVersion":"383","creationTimestamp":"2024-01-16T02:48:31Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-01-16T02:48:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0116 02:48:31.310278    6604 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0116 02:48:31.311225    6604 addons.go:505] enable addons completed in 10.195907s: enabled=[storage-provisioner default-storageclass]
	I0116 02:48:31.433176    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:31.433176    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:31.433176    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:31.433176    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:31.438091    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:48:31.438129    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:31.438129    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:31.438129    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:31.438274    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:31.438274    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:31 GMT
	I0116 02:48:31.438274    6604 round_trippers.go:580]     Audit-Id: 3ad05066-b6e5-4dcf-90b0-094da2ee3ad5
	I0116 02:48:31.438274    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:31.438546    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:31.944581    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:31.944581    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:31.944692    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:31.944692    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:31.951625    6604 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 02:48:31.951684    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:31.951684    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:31 GMT
	I0116 02:48:31.951721    6604 round_trippers.go:580]     Audit-Id: c25361ba-4fd5-49a7-bfb2-cf8ba62953dd
	I0116 02:48:31.951721    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:31.951754    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:31.951754    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:31.951754    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:31.951754    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:31.952499    6604 node_ready.go:58] node "multinode-853900" has status "Ready":"False"
	I0116 02:48:32.434382    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:32.434382    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:32.434382    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:32.434382    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:32.439328    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:48:32.439328    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:32.439693    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:32.439693    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:32.439693    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:32.439693    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:32 GMT
	I0116 02:48:32.439815    6604 round_trippers.go:580]     Audit-Id: 4e478d61-2ece-4d9a-8442-c395c3435874
	I0116 02:48:32.439815    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:32.439970    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"318","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0116 02:48:32.938568    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:32.938621    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:32.938673    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:32.938673    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:32.946155    6604 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 02:48:32.946155    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:32.946155    6604 round_trippers.go:580]     Audit-Id: ccd393f9-43f3-493a-a68e-00204f9be108
	I0116 02:48:32.946155    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:32.946155    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:32.946155    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:32.946155    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:32.946155    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:32 GMT
	I0116 02:48:32.947120    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"390","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4854 chars]
	I0116 02:48:32.947120    6604 node_ready.go:49] node "multinode-853900" has status "Ready":"True"
	I0116 02:48:32.947120    6604 node_ready.go:38] duration metric: took 10.5158581s waiting for node "multinode-853900" to be "Ready" ...
	I0116 02:48:32.947120    6604 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:48:32.947120    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods
	I0116 02:48:32.947120    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:32.947120    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:32.947120    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:32.952177    6604 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 02:48:32.952177    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:32.952177    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:32.952177    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:32.952177    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:32.952177    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:32.952440    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:32 GMT
	I0116 02:48:32.952440    6604 round_trippers.go:580]     Audit-Id: ce81f1f8-ccf2-48b7-b526-b1f385f6b7c0
	I0116 02:48:32.954024    6604 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"390"},"items":[{"metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"351","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51376 chars]
	I0116 02:48:32.958848    6604 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:32.958848    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-62jpz
	I0116 02:48:32.958848    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:32.958848    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:32.958848    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:32.964528    6604 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 02:48:32.964528    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:32.964528    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:32 GMT
	I0116 02:48:32.964528    6604 round_trippers.go:580]     Audit-Id: 9c0e6367-e131-4bb9-b5ca-d8e79e021685
	I0116 02:48:32.964528    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:32.964528    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:32.964528    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:32.964528    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:32.964528    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"351","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 4943 chars]
	I0116 02:48:33.473025    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-62jpz
	I0116 02:48:33.473075    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:33.473132    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:33.473132    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:33.480204    6604 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 02:48:33.480204    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:33.480204    6604 round_trippers.go:580]     Audit-Id: 67a6a02f-1e49-4474-b736-ec599da15255
	I0116 02:48:33.480204    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:33.480204    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:33.480204    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:33.480204    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:33.480204    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:33 GMT
	I0116 02:48:33.480204    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"396","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0116 02:48:33.481752    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:33.481801    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:33.481801    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:33.481852    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:33.484187    6604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:48:33.484187    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:33.484187    6604 round_trippers.go:580]     Audit-Id: ec8443aa-a15a-402e-9d91-95ed481809e0
	I0116 02:48:33.484187    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:33.484187    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:33.484598    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:33.484598    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:33.484676    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:33 GMT
	I0116 02:48:33.484853    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"391","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0116 02:48:33.967994    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-62jpz
	I0116 02:48:33.968050    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:33.968108    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:33.968108    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:33.971981    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:33.972077    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:33.972077    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:33.972077    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:33 GMT
	I0116 02:48:33.972077    6604 round_trippers.go:580]     Audit-Id: 1c8ddad7-b85a-49a8-abc5-a4a754ddda71
	I0116 02:48:33.972077    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:33.972143    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:33.972143    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:33.972443    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"396","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0116 02:48:33.973193    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:33.973193    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:33.973193    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:33.973193    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:33.975845    6604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:48:33.976198    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:33.976198    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:33.976198    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:33.976198    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:33 GMT
	I0116 02:48:33.976260    6604 round_trippers.go:580]     Audit-Id: 75a8fa6f-03ce-421a-b9be-5f8a015791de
	I0116 02:48:33.976260    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:33.976260    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:33.976511    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"391","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0116 02:48:34.461676    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-62jpz
	I0116 02:48:34.461718    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:34.461776    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:34.461776    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:34.469903    6604 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0116 02:48:34.469903    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:34.469903    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:34.469903    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:34.469903    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:34.469903    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:34.469903    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:34 GMT
	I0116 02:48:34.469903    6604 round_trippers.go:580]     Audit-Id: 3b354da9-d448-4188-ab35-14ca1eb1727b
	I0116 02:48:34.470603    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"396","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0116 02:48:34.471504    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:34.472092    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:34.472092    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:34.472092    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:34.474746    6604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:48:34.474746    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:34.474746    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:34.474746    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:34.475555    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:34 GMT
	I0116 02:48:34.475555    6604 round_trippers.go:580]     Audit-Id: d0d23346-2445-426b-8e43-5b3554b21ed2
	I0116 02:48:34.475555    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:34.475555    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:34.475751    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"391","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0116 02:48:34.964532    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-62jpz
	I0116 02:48:34.964627    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:34.964627    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:34.964627    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:34.969300    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:34.969357    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:34.969357    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:34.969357    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:34.969357    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:34.969357    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:34.969357    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:34 GMT
	I0116 02:48:34.969452    6604 round_trippers.go:580]     Audit-Id: b45d9832-c10f-4230-8eea-942f20d73fab
	I0116 02:48:34.969865    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"396","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0116 02:48:34.970560    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:34.970648    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:34.970648    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:34.970648    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:34.974930    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:48:34.974930    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:34.974930    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:34.974930    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:34.974930    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:34.974930    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:34 GMT
	I0116 02:48:34.974930    6604 round_trippers.go:580]     Audit-Id: ad991630-bb16-46d4-a454-2ec7fb13297e
	I0116 02:48:34.974930    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:34.974930    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"391","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0116 02:48:34.974930    6604 pod_ready.go:102] pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace has status "Ready":"False"
	I0116 02:48:35.465118    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-62jpz
	I0116 02:48:35.465118    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:35.465118    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:35.465229    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:35.468928    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:35.468928    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:35.468928    6604 round_trippers.go:580]     Audit-Id: f45864f2-8d27-4cc6-8829-764ea5bffa08
	I0116 02:48:35.469659    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:35.469659    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:35.469659    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:35.469659    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:35.469659    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:35 GMT
	I0116 02:48:35.469999    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"410","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I0116 02:48:35.470786    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:35.470889    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:35.470889    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:35.470889    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:35.473090    6604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:48:35.473090    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:35.473090    6604 round_trippers.go:580]     Audit-Id: 83d4a5a6-56a0-4feb-bcd5-14ece8f74afb
	I0116 02:48:35.473090    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:35.473090    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:35.473090    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:35.473090    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:35.473090    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:35 GMT
	I0116 02:48:35.474239    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"391","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0116 02:48:35.474239    6604 pod_ready.go:92] pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace has status "Ready":"True"
	I0116 02:48:35.474775    6604 pod_ready.go:81] duration metric: took 2.5159118s waiting for pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:35.474775    6604 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:35.474775    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-853900
	I0116 02:48:35.474775    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:35.474775    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:35.474775    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:35.478271    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:35.478271    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:35.478466    6604 round_trippers.go:580]     Audit-Id: 04f8854d-510d-48a7-a2c7-d82d32a96704
	I0116 02:48:35.478466    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:35.478466    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:35.478466    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:35.478534    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:35.478534    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:35 GMT
	I0116 02:48:35.478913    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-853900","namespace":"kube-system","uid":"384c4f82-a0f3-4576-b859-80837d0f109b","resourceVersion":"374","creationTimestamp":"2024-01-16T02:48:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.112.69:2379","kubernetes.io/config.hash":"c7b11a574a1c958cf64320e53e2315c6","kubernetes.io/config.mirror":"c7b11a574a1c958cf64320e53e2315c6","kubernetes.io/config.seen":"2024-01-16T02:48:09.211488777Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I0116 02:48:35.479274    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:35.479274    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:35.479274    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:35.479274    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:35.481855    6604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:48:35.481855    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:35.482866    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:35.482866    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:35.482866    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:35.482866    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:35.482866    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:35 GMT
	I0116 02:48:35.482866    6604 round_trippers.go:580]     Audit-Id: 311b405c-a0a7-4c8c-8c3a-3d1e9cd76570
	I0116 02:48:35.482866    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"391","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0116 02:48:35.483409    6604 pod_ready.go:92] pod "etcd-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 02:48:35.483409    6604 pod_ready.go:81] duration metric: took 8.6339ms waiting for pod "etcd-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:35.483409    6604 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:35.483603    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-853900
	I0116 02:48:35.483603    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:35.483603    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:35.483603    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:35.486361    6604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:48:35.486361    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:35.486361    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:35.486361    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:35 GMT
	I0116 02:48:35.486361    6604 round_trippers.go:580]     Audit-Id: 75fa5616-51f0-4bad-936c-8bb2f7ea295d
	I0116 02:48:35.487265    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:35.487265    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:35.487265    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:35.487456    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-853900","namespace":"kube-system","uid":"a437ff8c-f27b-433b-97ac-dae3d276bc92","resourceVersion":"376","creationTimestamp":"2024-01-16T02:48:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.112.69:8443","kubernetes.io/config.hash":"41ea37f04f983128860ae937c9f060bb","kubernetes.io/config.mirror":"41ea37f04f983128860ae937c9f060bb","kubernetes.io/config.seen":"2024-01-16T02:48:00.146128309Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I0116 02:48:35.487620    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:35.487620    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:35.487620    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:35.487620    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:35.490391    6604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:48:35.490391    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:35.491333    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:35.491333    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:35 GMT
	I0116 02:48:35.491333    6604 round_trippers.go:580]     Audit-Id: 0d9573d6-6e93-4823-abb7-effb43fc6396
	I0116 02:48:35.491333    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:35.491333    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:35.491333    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:35.491553    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"391","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0116 02:48:35.491919    6604 pod_ready.go:92] pod "kube-apiserver-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 02:48:35.491979    6604 pod_ready.go:81] duration metric: took 8.5695ms waiting for pod "kube-apiserver-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:35.491979    6604 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:35.492111    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-853900
	I0116 02:48:35.492111    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:35.492111    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:35.492170    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:35.494596    6604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:48:35.494596    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:35.494596    6604 round_trippers.go:580]     Audit-Id: a079a524-fca6-4aa9-8439-a40e3084c603
	I0116 02:48:35.494596    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:35.494596    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:35.495056    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:35.495056    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:35.495056    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:35 GMT
	I0116 02:48:35.495293    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-853900","namespace":"kube-system","uid":"5a4d4e86-9836-401a-8d98-1519ff75a0ec","resourceVersion":"378","creationTimestamp":"2024-01-16T02:48:08Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f09e1ab837c9ef5b247e4d57afe8993b","kubernetes.io/config.mirror":"f09e1ab837c9ef5b247e4d57afe8993b","kubernetes.io/config.seen":"2024-01-16T02:48:00.146129509Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I0116 02:48:35.496000    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:35.496078    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:35.496078    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:35.496078    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:35.499779    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:35.499779    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:35.499779    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:35 GMT
	I0116 02:48:35.499779    6604 round_trippers.go:580]     Audit-Id: 95bf30de-47f5-4a46-8f01-2ed4f5c43a67
	I0116 02:48:35.499779    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:35.499779    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:35.499912    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:35.499912    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:35.500092    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"391","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0116 02:48:35.500092    6604 pod_ready.go:92] pod "kube-controller-manager-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 02:48:35.500092    6604 pod_ready.go:81] duration metric: took 8.1136ms waiting for pod "kube-controller-manager-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:35.500092    6604 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tpc2g" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:35.500720    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tpc2g
	I0116 02:48:35.500720    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:35.500720    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:35.500720    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:35.503104    6604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:48:35.503104    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:35.504053    6604 round_trippers.go:580]     Audit-Id: e7cd86b7-fb2d-45fa-933d-15e15cb76aac
	I0116 02:48:35.504053    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:35.504053    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:35.504053    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:35.504053    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:35.504053    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:35 GMT
	I0116 02:48:35.504053    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tpc2g","generateName":"kube-proxy-","namespace":"kube-system","uid":"0cb279ef-9d3a-4c55-9c57-ce7eede8a052","resourceVersion":"368","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I0116 02:48:35.504860    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:35.504925    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:35.504925    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:35.504925    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:35.507734    6604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:48:35.508488    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:35.508488    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:35.508488    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:35.508592    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:35 GMT
	I0116 02:48:35.508680    6604 round_trippers.go:580]     Audit-Id: 00559e57-5673-482e-9728-82484c448341
	I0116 02:48:35.508780    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:35.508780    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:35.508898    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"391","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0116 02:48:35.508898    6604 pod_ready.go:92] pod "kube-proxy-tpc2g" in "kube-system" namespace has status "Ready":"True"
	I0116 02:48:35.509444    6604 pod_ready.go:81] duration metric: took 9.3511ms waiting for pod "kube-proxy-tpc2g" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:35.509444    6604 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:35.665930    6604 request.go:629] Waited for 156.4856ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-853900
	I0116 02:48:35.666167    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-853900
	I0116 02:48:35.666167    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:35.666167    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:35.666255    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:35.673758    6604 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 02:48:35.673758    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:35.673758    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:35.673758    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:35.673758    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:35.673758    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:35 GMT
	I0116 02:48:35.673758    6604 round_trippers.go:580]     Audit-Id: 959f3da8-2830-4492-830c-e9f0f68a134e
	I0116 02:48:35.673758    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:35.674443    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-853900","namespace":"kube-system","uid":"d75db7e3-c171-428f-9c08-f268ce16da31","resourceVersion":"354","creationTimestamp":"2024-01-16T02:48:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aff36fe37a6d6fc8d309826a0f54f93d","kubernetes.io/config.mirror":"aff36fe37a6d6fc8d309826a0f54f93d","kubernetes.io/config.seen":"2024-01-16T02:48:09.211494477Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I0116 02:48:35.867948    6604 request.go:629] Waited for 193.2733ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:35.868180    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:48:35.868180    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:35.868180    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:35.868180    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:35.872911    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:48:35.872911    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:35.872911    6604 round_trippers.go:580]     Audit-Id: 30792dc6-4178-413e-a8ca-4aad3921c003
	I0116 02:48:35.872911    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:35.872911    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:35.873355    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:35.873355    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:35.873355    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:35 GMT
	I0116 02:48:35.873602    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"391","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0116 02:48:35.874018    6604 pod_ready.go:92] pod "kube-scheduler-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 02:48:35.874289    6604 pod_ready.go:81] duration metric: took 364.8435ms waiting for pod "kube-scheduler-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 02:48:35.874289    6604 pod_ready.go:38] duration metric: took 2.9271512s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
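The `pod_ready.go` loop traced above repeatedly GETs each control-plane pod and declares it ready once its status carries a `Ready` condition with status `"True"` (the `pod_ready.go:92` lines). A minimal sketch of that check, using a simplified stand-in struct rather than the real client-go `v1.PodCondition` type:

```go
package main

import "fmt"

// PodCondition is a hypothetical, simplified stand-in for the
// client-go v1.PodCondition type; only the two fields the readiness
// check looks at are kept.
type PodCondition struct {
	Type   string // e.g. "Ready", "PodScheduled"
	Status string // "True", "False", or "Unknown"
}

// isPodReady reports whether the pod's "Ready" condition is "True",
// which is what the pod_ready.go wait loop above boils down to.
func isPodReady(conds []PodCondition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	// No Ready condition yet (pod still initializing): not ready.
	return false
}

func main() {
	conds := []PodCondition{
		{Type: "PodScheduled", Status: "True"},
		{Type: "Ready", Status: "True"},
	}
	fmt.Println(isPodReady(conds)) // prints "true"
}
```

In the log this check runs once per control-plane component (etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler), each followed by a node GET to confirm the node itself is still Ready.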
	I0116 02:48:35.874416    6604 api_server.go:52] waiting for apiserver process to appear ...
	I0116 02:48:35.888327    6604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 02:48:35.911781    6604 command_runner.go:130] > 2024
	I0116 02:48:35.911781    6604 api_server.go:72] duration metric: took 14.2240293s to wait for apiserver process to appear ...
	I0116 02:48:35.911781    6604 api_server.go:88] waiting for apiserver healthz status ...
	I0116 02:48:35.912780    6604 api_server.go:253] Checking apiserver healthz at https://172.27.112.69:8443/healthz ...
	I0116 02:48:35.925287    6604 api_server.go:279] https://172.27.112.69:8443/healthz returned 200:
	ok
	I0116 02:48:35.925887    6604 round_trippers.go:463] GET https://172.27.112.69:8443/version
	I0116 02:48:35.925929    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:35.925929    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:35.925960    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:35.927676    6604 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 02:48:35.927676    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:35.927676    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:35.927676    6604 round_trippers.go:580]     Content-Length: 264
	I0116 02:48:35.927676    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:35 GMT
	I0116 02:48:35.928091    6604 round_trippers.go:580]     Audit-Id: b1d14dd9-6579-4fb5-97ac-28bca08ddacf
	I0116 02:48:35.928091    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:35.928091    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:35.928091    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:35.928091    6604 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0116 02:48:35.928275    6604 api_server.go:141] control plane version: v1.28.4
	I0116 02:48:35.928311    6604 api_server.go:131] duration metric: took 16.5301ms to wait for apiserver health ...
	I0116 02:48:35.928311    6604 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 02:48:36.070928    6604 request.go:629] Waited for 142.417ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods
	I0116 02:48:36.071178    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods
	I0116 02:48:36.071178    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:36.071178    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:36.071178    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:36.078737    6604 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 02:48:36.078737    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:36.078737    6604 round_trippers.go:580]     Audit-Id: f4520575-d3c5-4a81-94b9-f56b317cae6f
	I0116 02:48:36.078737    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:36.078737    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:36.078737    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:36.078737    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:36.078737    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:36 GMT
	I0116 02:48:36.079959    6604 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"410","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I0116 02:48:36.083099    6604 system_pods.go:59] 8 kube-system pods found
	I0116 02:48:36.083215    6604 system_pods.go:61] "coredns-5dd5756b68-62jpz" [c028c1eb-0071-40bf-a163-6f71a10dc945] Running
	I0116 02:48:36.083215    6604 system_pods.go:61] "etcd-multinode-853900" [384c4f82-a0f3-4576-b859-80837d0f109b] Running
	I0116 02:48:36.083262    6604 system_pods.go:61] "kindnet-x5nvv" [2c841275-aff6-41c4-a995-5265f31aaa2d] Running
	I0116 02:48:36.083262    6604 system_pods.go:61] "kube-apiserver-multinode-853900" [a437ff8c-f27b-433b-97ac-dae3d276bc92] Running
	I0116 02:48:36.083293    6604 system_pods.go:61] "kube-controller-manager-multinode-853900" [5a4d4e86-9836-401a-8d98-1519ff75a0ec] Running
	I0116 02:48:36.083293    6604 system_pods.go:61] "kube-proxy-tpc2g" [0cb279ef-9d3a-4c55-9c57-ce7eede8a052] Running
	I0116 02:48:36.083293    6604 system_pods.go:61] "kube-scheduler-multinode-853900" [d75db7e3-c171-428f-9c08-f268ce16da31] Running
	I0116 02:48:36.083293    6604 system_pods.go:61] "storage-provisioner" [5a08e24f-688d-4839-9157-d9a0b92bd32c] Running
	I0116 02:48:36.083293    6604 system_pods.go:74] duration metric: took 154.9804ms to wait for pod list to return data ...
	I0116 02:48:36.083293    6604 default_sa.go:34] waiting for default service account to be created ...
	I0116 02:48:36.273563    6604 request.go:629] Waited for 190.0709ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.112.69:8443/api/v1/namespaces/default/serviceaccounts
	I0116 02:48:36.273866    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/default/serviceaccounts
	I0116 02:48:36.273866    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:36.273866    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:36.273866    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:36.277669    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:36.277669    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:36.277669    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:36.277669    6604 round_trippers.go:580]     Content-Length: 261
	I0116 02:48:36.278628    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:36 GMT
	I0116 02:48:36.278628    6604 round_trippers.go:580]     Audit-Id: feea3bd6-0a57-495c-b63f-bef80bb3d9d9
	I0116 02:48:36.278628    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:36.278628    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:36.278628    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:36.278735    6604 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"e81a9b1b-f727-439a-873c-17af64fc234f","resourceVersion":"308","creationTimestamp":"2024-01-16T02:48:21Z"}}]}
	I0116 02:48:36.278735    6604 default_sa.go:45] found service account: "default"
	I0116 02:48:36.278735    6604 default_sa.go:55] duration metric: took 195.4417ms for default service account to be created ...
	I0116 02:48:36.278735    6604 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 02:48:36.476372    6604 request.go:629] Waited for 197.3707ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods
	I0116 02:48:36.476449    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods
	I0116 02:48:36.476509    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:36.476509    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:36.476509    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:36.484980    6604 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0116 02:48:36.484980    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:36.484980    6604 round_trippers.go:580]     Audit-Id: 808f8b40-869c-45e4-b175-a116c7b0a0bd
	I0116 02:48:36.484980    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:36.484980    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:36.484980    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:36.484980    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:36.484980    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:36 GMT
	I0116 02:48:36.487155    6604 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"415"},"items":[{"metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"410","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54088 chars]
	I0116 02:48:36.489857    6604 system_pods.go:86] 8 kube-system pods found
	I0116 02:48:36.489857    6604 system_pods.go:89] "coredns-5dd5756b68-62jpz" [c028c1eb-0071-40bf-a163-6f71a10dc945] Running
	I0116 02:48:36.489857    6604 system_pods.go:89] "etcd-multinode-853900" [384c4f82-a0f3-4576-b859-80837d0f109b] Running
	I0116 02:48:36.489857    6604 system_pods.go:89] "kindnet-x5nvv" [2c841275-aff6-41c4-a995-5265f31aaa2d] Running
	I0116 02:48:36.489857    6604 system_pods.go:89] "kube-apiserver-multinode-853900" [a437ff8c-f27b-433b-97ac-dae3d276bc92] Running
	I0116 02:48:36.489857    6604 system_pods.go:89] "kube-controller-manager-multinode-853900" [5a4d4e86-9836-401a-8d98-1519ff75a0ec] Running
	I0116 02:48:36.489857    6604 system_pods.go:89] "kube-proxy-tpc2g" [0cb279ef-9d3a-4c55-9c57-ce7eede8a052] Running
	I0116 02:48:36.489857    6604 system_pods.go:89] "kube-scheduler-multinode-853900" [d75db7e3-c171-428f-9c08-f268ce16da31] Running
	I0116 02:48:36.489857    6604 system_pods.go:89] "storage-provisioner" [5a08e24f-688d-4839-9157-d9a0b92bd32c] Running
	I0116 02:48:36.489857    6604 system_pods.go:126] duration metric: took 211.1203ms to wait for k8s-apps to be running ...
	I0116 02:48:36.489857    6604 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:48:36.503061    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:48:36.528165    6604 system_svc.go:56] duration metric: took 38.3077ms WaitForService to wait for kubelet.
	I0116 02:48:36.528324    6604 kubeadm.go:581] duration metric: took 14.8405683s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:48:36.528416    6604 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:48:36.678612    6604 request.go:629] Waited for 149.8552ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.112.69:8443/api/v1/nodes
	I0116 02:48:36.678898    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes
	I0116 02:48:36.678898    6604 round_trippers.go:469] Request Headers:
	I0116 02:48:36.678898    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:48:36.678898    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:48:36.682851    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:48:36.682851    6604 round_trippers.go:577] Response Headers:
	I0116 02:48:36.683452    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:48:36 GMT
	I0116 02:48:36.683452    6604 round_trippers.go:580]     Audit-Id: d569b728-4316-472f-991c-2fe21d1ad78f
	I0116 02:48:36.683452    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:48:36.683452    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:48:36.683452    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:48:36.683452    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:48:36.683506    6604 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"416"},"items":[{"metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"391","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4835 chars]
	I0116 02:48:36.684137    6604 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:48:36.684351    6604 node_conditions.go:123] node cpu capacity is 2
	I0116 02:48:36.684351    6604 node_conditions.go:105] duration metric: took 155.7955ms to run NodePressure ...
	I0116 02:48:36.684351    6604 start.go:228] waiting for startup goroutines ...
	I0116 02:48:36.684351    6604 start.go:233] waiting for cluster config update ...
	I0116 02:48:36.684351    6604 start.go:242] writing updated cluster config ...
	I0116 02:48:36.686511    6604 out.go:177] 
	I0116 02:48:36.697658    6604 config.go:182] Loaded profile config "multinode-853900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 02:48:36.698734    6604 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\config.json ...
	I0116 02:48:36.702513    6604 out.go:177] * Starting worker node multinode-853900-m02 in cluster multinode-853900
	I0116 02:48:36.703203    6604 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0116 02:48:36.703203    6604 cache.go:56] Caching tarball of preloaded images
	I0116 02:48:36.703406    6604 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0116 02:48:36.704044    6604 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0116 02:48:36.704212    6604 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\config.json ...
	I0116 02:48:36.714374    6604 start.go:365] acquiring machines lock for multinode-853900-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 02:48:36.714374    6604 start.go:369] acquired machines lock for "multinode-853900-m02" in 0s
	I0116 02:48:36.714374    6604 start.go:93] Provisioning new machine with config: &{Name:multinode-853900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-853900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.112.69 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0116 02:48:36.714374    6604 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0116 02:48:36.715341    6604 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0116 02:48:36.715341    6604 start.go:159] libmachine.API.Create for "multinode-853900" (driver="hyperv")
	I0116 02:48:36.715341    6604 client.go:168] LocalClient.Create starting
	I0116 02:48:36.716351    6604 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0116 02:48:36.716351    6604 main.go:141] libmachine: Decoding PEM data...
	I0116 02:48:36.716351    6604 main.go:141] libmachine: Parsing certificate...
	I0116 02:48:36.716351    6604 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0116 02:48:36.716351    6604 main.go:141] libmachine: Decoding PEM data...
	I0116 02:48:36.716351    6604 main.go:141] libmachine: Parsing certificate...
	I0116 02:48:36.716351    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0116 02:48:38.608921    6604 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0116 02:48:38.608921    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:48:38.609024    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0116 02:48:40.391098    6604 main.go:141] libmachine: [stdout =====>] : False
	
	I0116 02:48:40.391098    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:48:40.391193    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0116 02:48:41.907100    6604 main.go:141] libmachine: [stdout =====>] : True
	
	I0116 02:48:41.907342    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:48:41.907342    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0116 02:48:45.485740    6604 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0116 02:48:45.485740    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:48:45.488435    6604 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0116 02:48:45.904762    6604 main.go:141] libmachine: Creating SSH key...
	I0116 02:48:46.054821    6604 main.go:141] libmachine: Creating VM...
	I0116 02:48:46.054821    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0116 02:48:49.004290    6604 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0116 02:48:49.004290    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:48:49.004290    6604 main.go:141] libmachine: Using switch "Default Switch"
	I0116 02:48:49.004290    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0116 02:48:50.757730    6604 main.go:141] libmachine: [stdout =====>] : True
	
	I0116 02:48:50.757995    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:48:50.757995    6604 main.go:141] libmachine: Creating VHD
	I0116 02:48:50.757995    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0116 02:48:54.454609    6604 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C292CABC-914A-49FA-8111-0405F4290D9D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0116 02:48:54.454727    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:48:54.454850    6604 main.go:141] libmachine: Writing magic tar header
	I0116 02:48:54.454850    6604 main.go:141] libmachine: Writing SSH key tar header
	I0116 02:48:54.464073    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0116 02:48:57.606928    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:48:57.606928    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:48:57.607009    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m02\disk.vhd' -SizeBytes 20000MB
	I0116 02:49:00.116319    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:49:00.116319    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:00.116446    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-853900-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0116 02:49:03.723456    6604 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-853900-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0116 02:49:03.723528    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:03.723528    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-853900-m02 -DynamicMemoryEnabled $false
	I0116 02:49:05.987282    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:49:05.987507    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:05.987507    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-853900-m02 -Count 2
	I0116 02:49:08.189587    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:49:08.189587    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:08.189693    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-853900-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m02\boot2docker.iso'
	I0116 02:49:11.052781    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:49:11.053006    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:11.053006    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-853900-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m02\disk.vhd'
	I0116 02:49:13.705661    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:49:13.705972    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:13.705972    6604 main.go:141] libmachine: Starting VM...
	I0116 02:49:13.706073    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-853900-m02
	I0116 02:49:16.646946    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:49:16.647095    6604 main.go:141] libmachine: [stderr =====>] : 
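Every Hyper-V call in the log above uses the same non-interactive PowerShell prefix (`powershell.exe -NoProfile -NonInteractive <cmdlet>`) for each step of the creation sequence: New-VHD, Convert-VHD, Resize-VHD, New-VM, Set-VMMemory, Set-VMProcessor, Set-VMDvdDrive, Add-VMHardDiskDrive, Start-VM. A minimal sketch of composing such invocations; the helper names (`build_cmd`, `CREATE_STEPS`) are illustrative only, not minikube's actual Go hyperv driver code:

```python
# Sketch: composing the Hyper-V PowerShell invocations seen in the log.
# The helpers here are hypothetical; minikube's real driver is written in Go.
POWERSHELL = r"C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe"

def build_cmd(script: str) -> list[str]:
    # Every call in the log uses the same non-interactive prefix.
    return [POWERSHELL, "-NoProfile", "-NonInteractive", script]

vm = "multinode-853900-m02"
CREATE_STEPS = [
    f"Hyper-V\\Set-VMMemory -VMName {vm} -DynamicMemoryEnabled $false",
    f"Hyper-V\\Set-VMProcessor {vm} -Count 2",
    f"Hyper-V\\Start-VM {vm}",
]

cmds = [build_cmd(step) for step in CREATE_STEPS]
```

Each resulting argument list would then be handed to the process runner, with stdout/stderr captured and echoed back as the `[stdout =====>]` / `[stderr =====>]` lines in the log.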
	I0116 02:49:16.647161    6604 main.go:141] libmachine: Waiting for host to start...
	I0116 02:49:16.647161    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:49:19.004182    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:49:19.004182    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:19.004182    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:49:21.542884    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:49:21.543143    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:22.544953    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:49:24.772382    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:49:24.772382    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:24.772454    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:49:27.320042    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:49:27.320042    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:28.334147    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:49:30.500617    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:49:30.500694    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:30.500694    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:49:33.025099    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:49:33.025099    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:34.029830    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:49:36.305740    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:49:36.305740    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:36.305740    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:49:38.821940    6604 main.go:141] libmachine: [stdout =====>] : 
	I0116 02:49:38.822305    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:39.823340    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:49:42.085520    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:49:42.085854    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:42.085951    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:49:44.646029    6604 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 02:49:44.646029    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:44.646122    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:49:46.761732    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:49:46.761732    6604 main.go:141] libmachine: [stderr =====>] : 
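The "Waiting for host to start..." section above polls two PowerShell probes about once a second: the VM state (`( Hyper-V\Get-VM <name> ).state`) and the first NIC's first IP address, looping until the address query stops returning empty output (here, until `172.27.122.78` appears). A sketch of that retry loop, with the probe callables standing in for the PowerShell queries:

```python
import time

def wait_for_ip(get_state, get_ip, interval=1.0, timeout=120.0):
    """Poll until the VM reports Running and an IPv4 address is assigned.

    get_state/get_ip stand in for the two PowerShell probes in the log;
    an empty string from get_ip means no address has been leased yet.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_state() == "Running":
            ip = get_ip()
            if ip:
                return ip
        time.sleep(interval)
    raise TimeoutError("host did not obtain an IP address in time")

# Simulated probes: the address shows up on the third query, as in the log.
answers = iter(["", "", "172.27.122.78"])
ip = wait_for_ip(lambda: "Running", lambda: next(answers), interval=0.01)
```

The roughly one-second gaps between successive `Get-VM` calls in the timestamps above correspond to the sleep between polls.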
	I0116 02:49:46.761962    6604 machine.go:88] provisioning docker machine ...
	I0116 02:49:46.761962    6604 buildroot.go:166] provisioning hostname "multinode-853900-m02"
	I0116 02:49:46.761962    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:49:48.902288    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:49:48.902288    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:48.902369    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:49:51.440801    6604 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 02:49:51.440801    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:51.447179    6604 main.go:141] libmachine: Using SSH client type: native
	I0116 02:49:51.457494    6604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.122.78 22 <nil> <nil>}
	I0116 02:49:51.457494    6604 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-853900-m02 && echo "multinode-853900-m02" | sudo tee /etc/hostname
	I0116 02:49:51.626169    6604 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-853900-m02
	
	I0116 02:49:51.626169    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:49:53.700449    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:49:53.700449    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:53.700638    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:49:56.229252    6604 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 02:49:56.229446    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:56.235445    6604 main.go:141] libmachine: Using SSH client type: native
	I0116 02:49:56.235700    6604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.122.78 22 <nil> <nil>}
	I0116 02:49:56.235700    6604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-853900-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-853900-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-853900-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 02:49:56.402556    6604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 02:49:56.402609    6604 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0116 02:49:56.402609    6604 buildroot.go:174] setting up certificates
	I0116 02:49:56.402739    6604 provision.go:83] configureAuth start
	I0116 02:49:56.402739    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:49:58.499307    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:49:58.499551    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:49:58.499551    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:50:01.040840    6604 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 02:50:01.041043    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:01.041043    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:50:03.223570    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:50:03.223864    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:03.223864    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:50:05.804931    6604 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 02:50:05.804931    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:05.804931    6604 provision.go:138] copyHostCerts
	I0116 02:50:05.805170    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0116 02:50:05.805506    6604 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0116 02:50:05.805506    6604 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0116 02:50:05.805865    6604 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0116 02:50:05.807045    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0116 02:50:05.807327    6604 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0116 02:50:05.807374    6604 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0116 02:50:05.807651    6604 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0116 02:50:05.808832    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0116 02:50:05.809120    6604 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0116 02:50:05.809120    6604 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0116 02:50:05.809465    6604 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1675 bytes)
	I0116 02:50:05.810514    6604 provision.go:112] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-853900-m02 san=[172.27.122.78 172.27.122.78 localhost 127.0.0.1 minikube multinode-853900-m02]
	I0116 02:50:06.073411    6604 provision.go:172] copyRemoteCerts
	I0116 02:50:06.089754    6604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 02:50:06.089835    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:50:08.188749    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:50:08.188749    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:08.188842    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:50:10.716716    6604 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 02:50:10.716912    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:10.717055    6604 sshutil.go:53] new ssh client: &{IP:172.27.122.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m02\id_rsa Username:docker}
	I0116 02:50:10.824859    6604 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7350743s)
	I0116 02:50:10.824919    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0116 02:50:10.825333    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 02:50:10.865435    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0116 02:50:10.865817    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0116 02:50:10.912413    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0116 02:50:10.912413    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 02:50:10.958414    6604 provision.go:86] duration metric: configureAuth took 14.5555797s
	I0116 02:50:10.958414    6604 buildroot.go:189] setting minikube options for container-runtime
	I0116 02:50:10.959417    6604 config.go:182] Loaded profile config "multinode-853900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 02:50:10.959417    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:50:13.100743    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:50:13.100864    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:13.100864    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:50:15.675966    6604 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 02:50:15.675966    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:15.682943    6604 main.go:141] libmachine: Using SSH client type: native
	I0116 02:50:15.683768    6604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.122.78 22 <nil> <nil>}
	I0116 02:50:15.683768    6604 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0116 02:50:15.841852    6604 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0116 02:50:15.841966    6604 buildroot.go:70] root file system type: tmpfs
	I0116 02:50:15.842197    6604 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0116 02:50:15.842290    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:50:17.965915    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:50:17.965915    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:17.965995    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:50:20.446349    6604 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 02:50:20.446590    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:20.451181    6604 main.go:141] libmachine: Using SSH client type: native
	I0116 02:50:20.452037    6604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.122.78 22 <nil> <nil>}
	I0116 02:50:20.452037    6604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.112.69"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0116 02:50:20.630259    6604 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.112.69
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0116 02:50:20.630417    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:50:22.733387    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:50:22.733653    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:22.733653    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:50:25.243401    6604 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 02:50:25.243401    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:25.249273    6604 main.go:141] libmachine: Using SSH client type: native
	I0116 02:50:25.250009    6604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.122.78 22 <nil> <nil>}
	I0116 02:50:25.250009    6604 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0116 02:50:26.260864    6604 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0116 02:50:26.260864    6604 machine.go:91] provisioned docker machine in 39.4986448s
	I0116 02:50:26.260864    6604 client.go:171] LocalClient.Create took 1m49.5448128s
	I0116 02:50:26.260864    6604 start.go:167] duration metric: libmachine.API.Create for "multinode-853900" took 1m49.5448128s
	I0116 02:50:26.260864    6604 start.go:300] post-start starting for "multinode-853900-m02" (driver="hyperv")
	I0116 02:50:26.260864    6604 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 02:50:26.276740    6604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 02:50:26.276740    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:50:28.386633    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:50:28.386633    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:28.386866    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:50:30.899463    6604 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 02:50:30.899523    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:30.899523    6604 sshutil.go:53] new ssh client: &{IP:172.27.122.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m02\id_rsa Username:docker}
	I0116 02:50:31.011025    6604 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7337686s)
	I0116 02:50:31.025208    6604 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 02:50:31.031510    6604 command_runner.go:130] > NAME=Buildroot
	I0116 02:50:31.032080    6604 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 02:50:31.032080    6604 command_runner.go:130] > ID=buildroot
	I0116 02:50:31.032080    6604 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 02:50:31.032080    6604 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 02:50:31.032329    6604 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 02:50:31.032466    6604 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0116 02:50:31.033016    6604 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0116 02:50:31.034109    6604 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> 135082.pem in /etc/ssl/certs
	I0116 02:50:31.034164    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> /etc/ssl/certs/135082.pem
	I0116 02:50:31.047741    6604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 02:50:31.064705    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem --> /etc/ssl/certs/135082.pem (1708 bytes)
	I0116 02:50:31.104563    6604 start.go:303] post-start completed in 4.8436677s
	I0116 02:50:31.107350    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:50:33.267698    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:50:33.268066    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:33.268131    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:50:35.858868    6604 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 02:50:35.858868    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:35.858868    6604 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\config.json ...
	I0116 02:50:35.862164    6604 start.go:128] duration metric: createHost completed in 1m59.1470168s
	I0116 02:50:35.862164    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:50:38.004767    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:50:38.004767    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:38.004998    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:50:40.546682    6604 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 02:50:40.546682    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:40.553005    6604 main.go:141] libmachine: Using SSH client type: native
	I0116 02:50:40.553991    6604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.122.78 22 <nil> <nil>}
	I0116 02:50:40.553991    6604 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 02:50:40.708175    6604 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705373440.708936450
	
	I0116 02:50:40.708175    6604 fix.go:206] guest clock: 1705373440.708936450
	I0116 02:50:40.708175    6604 fix.go:219] Guest: 2024-01-16 02:50:40.70893645 +0000 UTC Remote: 2024-01-16 02:50:35.8621643 +0000 UTC m=+324.876103401 (delta=4.84677215s)
	I0116 02:50:40.708175    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:50:42.894598    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:50:42.894598    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:42.894598    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:50:45.422999    6604 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 02:50:45.423321    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:45.428942    6604 main.go:141] libmachine: Using SSH client type: native
	I0116 02:50:45.429552    6604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.122.78 22 <nil> <nil>}
	I0116 02:50:45.429552    6604 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705373440
	I0116 02:50:45.594657    6604 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jan 16 02:50:40 UTC 2024
	
	I0116 02:50:45.594657    6604 fix.go:226] clock set: Tue Jan 16 02:50:40 UTC 2024
	 (err=<nil>)
	I0116 02:50:45.594657    6604 start.go:83] releasing machines lock for "multinode-853900-m02", held for 2m8.879447s
	I0116 02:50:45.595437    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:50:47.731038    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:50:47.731038    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:47.731111    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:50:50.273108    6604 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 02:50:50.273108    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:50.273591    6604 out.go:177] * Found network options:
	I0116 02:50:50.274628    6604 out.go:177]   - NO_PROXY=172.27.112.69
	W0116 02:50:50.275822    6604 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 02:50:50.276462    6604 out.go:177]   - NO_PROXY=172.27.112.69
	W0116 02:50:50.277144    6604 proxy.go:119] fail to check proxy env: Error ip not in block
	W0116 02:50:50.278545    6604 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 02:50:50.281800    6604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 02:50:50.281938    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:50:50.293196    6604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 02:50:50.293196    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 02:50:52.472265    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:50:52.472265    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:50:52.472453    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:52.472453    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:52.472453    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:50:52.472453    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 02:50:55.163950    6604 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 02:50:55.163950    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:55.164215    6604 sshutil.go:53] new ssh client: &{IP:172.27.122.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m02\id_rsa Username:docker}
	I0116 02:50:55.188494    6604 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 02:50:55.188494    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:50:55.188652    6604 sshutil.go:53] new ssh client: &{IP:172.27.122.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m02\id_rsa Username:docker}
	I0116 02:50:55.281497    6604 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0116 02:50:55.282029    6604 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9888s)
	W0116 02:50:55.282029    6604 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 02:50:55.296844    6604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 02:50:55.363071    6604 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 02:50:55.363071    6604 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0116 02:50:55.363295    6604 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 02:50:55.363366    6604 start.go:475] detecting cgroup driver to use...
	I0116 02:50:55.363071    6604 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0812382s)
	I0116 02:50:55.363366    6604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:50:55.397484    6604 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0116 02:50:55.413226    6604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0116 02:50:55.446899    6604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0116 02:50:55.465075    6604 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0116 02:50:55.480227    6604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0116 02:50:55.511345    6604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 02:50:55.544021    6604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0116 02:50:55.573366    6604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 02:50:55.602909    6604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 02:50:55.635295    6604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0116 02:50:55.667337    6604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 02:50:55.683289    6604 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0116 02:50:55.695811    6604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 02:50:55.725798    6604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:50:55.888975    6604 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0116 02:50:55.918253    6604 start.go:475] detecting cgroup driver to use...
	I0116 02:50:55.933055    6604 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0116 02:50:55.959076    6604 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0116 02:50:55.959076    6604 command_runner.go:130] > [Unit]
	I0116 02:50:55.959076    6604 command_runner.go:130] > Description=Docker Application Container Engine
	I0116 02:50:55.959197    6604 command_runner.go:130] > Documentation=https://docs.docker.com
	I0116 02:50:55.959197    6604 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0116 02:50:55.959197    6604 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0116 02:50:55.959197    6604 command_runner.go:130] > StartLimitBurst=3
	I0116 02:50:55.959197    6604 command_runner.go:130] > StartLimitIntervalSec=60
	I0116 02:50:55.959197    6604 command_runner.go:130] > [Service]
	I0116 02:50:55.959197    6604 command_runner.go:130] > Type=notify
	I0116 02:50:55.959197    6604 command_runner.go:130] > Restart=on-failure
	I0116 02:50:55.959197    6604 command_runner.go:130] > Environment=NO_PROXY=172.27.112.69
	I0116 02:50:55.959268    6604 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0116 02:50:55.959268    6604 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0116 02:50:55.959268    6604 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0116 02:50:55.959268    6604 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0116 02:50:55.959268    6604 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0116 02:50:55.959268    6604 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0116 02:50:55.959459    6604 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0116 02:50:55.959459    6604 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0116 02:50:55.959459    6604 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0116 02:50:55.959459    6604 command_runner.go:130] > ExecStart=
	I0116 02:50:55.959459    6604 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0116 02:50:55.959459    6604 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0116 02:50:55.959459    6604 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0116 02:50:55.959605    6604 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0116 02:50:55.959605    6604 command_runner.go:130] > LimitNOFILE=infinity
	I0116 02:50:55.959605    6604 command_runner.go:130] > LimitNPROC=infinity
	I0116 02:50:55.959605    6604 command_runner.go:130] > LimitCORE=infinity
	I0116 02:50:55.959605    6604 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0116 02:50:55.959605    6604 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0116 02:50:55.959605    6604 command_runner.go:130] > TasksMax=infinity
	I0116 02:50:55.959605    6604 command_runner.go:130] > TimeoutStartSec=0
	I0116 02:50:55.959605    6604 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0116 02:50:55.959605    6604 command_runner.go:130] > Delegate=yes
	I0116 02:50:55.959605    6604 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0116 02:50:55.959605    6604 command_runner.go:130] > KillMode=process
	I0116 02:50:55.959605    6604 command_runner.go:130] > [Install]
	I0116 02:50:55.959605    6604 command_runner.go:130] > WantedBy=multi-user.target
	I0116 02:50:55.974933    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:50:56.005932    6604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 02:50:56.044043    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 02:50:56.079542    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 02:50:56.113159    6604 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0116 02:50:56.168709    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 02:50:56.187406    6604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 02:50:56.218057    6604 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0116 02:50:56.232087    6604 ssh_runner.go:195] Run: which cri-dockerd
	I0116 02:50:56.240128    6604 command_runner.go:130] > /usr/bin/cri-dockerd
	I0116 02:50:56.254215    6604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0116 02:50:56.271616    6604 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0116 02:50:56.312746    6604 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0116 02:50:56.493195    6604 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0116 02:50:56.651311    6604 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0116 02:50:56.651400    6604 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0116 02:50:56.692054    6604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:50:56.862274    6604 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0116 02:50:58.401968    6604 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5395691s)
	I0116 02:50:58.416551    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0116 02:50:58.450203    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0116 02:50:58.480848    6604 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0116 02:50:58.651428    6604 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0116 02:50:58.822040    6604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:50:58.985025    6604 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0116 02:50:59.023993    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0116 02:50:59.057974    6604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 02:50:59.228145    6604 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0116 02:50:59.343708    6604 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0116 02:50:59.357704    6604 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0116 02:50:59.364999    6604 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0116 02:50:59.365056    6604 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 02:50:59.365056    6604 command_runner.go:130] > Device: 16h/22d	Inode: 915         Links: 1
	I0116 02:50:59.365056    6604 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0116 02:50:59.365056    6604 command_runner.go:130] > Access: 2024-01-16 02:50:59.250565621 +0000
	I0116 02:50:59.365168    6604 command_runner.go:130] > Modify: 2024-01-16 02:50:59.250565621 +0000
	I0116 02:50:59.365168    6604 command_runner.go:130] > Change: 2024-01-16 02:50:59.254565621 +0000
	I0116 02:50:59.365168    6604 command_runner.go:130] >  Birth: -
	I0116 02:50:59.365747    6604 start.go:543] Will wait 60s for crictl version
	I0116 02:50:59.379690    6604 ssh_runner.go:195] Run: which crictl
	I0116 02:50:59.385162    6604 command_runner.go:130] > /usr/bin/crictl
	I0116 02:50:59.401669    6604 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 02:50:59.482760    6604 command_runner.go:130] > Version:  0.1.0
	I0116 02:50:59.482760    6604 command_runner.go:130] > RuntimeName:  docker
	I0116 02:50:59.483502    6604 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0116 02:50:59.483502    6604 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 02:50:59.483651    6604 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0116 02:50:59.494567    6604 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0116 02:50:59.528195    6604 command_runner.go:130] > 24.0.7
	I0116 02:50:59.540037    6604 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0116 02:50:59.573041    6604 command_runner.go:130] > 24.0.7
	I0116 02:50:59.574042    6604 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0116 02:50:59.574042    6604 out.go:177]   - env NO_PROXY=172.27.112.69
	I0116 02:50:59.576903    6604 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0116 02:50:59.581046    6604 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0116 02:50:59.581046    6604 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0116 02:50:59.581046    6604 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0116 02:50:59.581046    6604 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:4e:7e Flags:up|broadcast|multicast|running}
	I0116 02:50:59.584040    6604 ip.go:210] interface addr: fe80::d699:fcba:3e2b:1549/64
	I0116 02:50:59.584040    6604 ip.go:210] interface addr: 172.27.112.1/20
	I0116 02:50:59.598049    6604 ssh_runner.go:195] Run: grep 172.27.112.1	host.minikube.internal$ /etc/hosts
	I0116 02:50:59.603728    6604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:50:59.621408    6604 certs.go:56] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900 for IP: 172.27.122.78
	I0116 02:50:59.621408    6604 certs.go:190] acquiring lock for shared ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 02:50:59.623406    6604 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0116 02:50:59.623406    6604 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0116 02:50:59.623406    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 02:50:59.623406    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0116 02:50:59.624642    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 02:50:59.624642    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 02:50:59.625434    6604 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508.pem (1338 bytes)
	W0116 02:50:59.625434    6604 certs.go:433] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508_empty.pem, impossibly tiny 0 bytes
	I0116 02:50:59.625434    6604 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0116 02:50:59.625434    6604 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0116 02:50:59.626415    6604 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0116 02:50:59.626415    6604 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0116 02:50:59.626415    6604 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem (1708 bytes)
	I0116 02:50:59.626415    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508.pem -> /usr/share/ca-certificates/13508.pem
	I0116 02:50:59.627450    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> /usr/share/ca-certificates/135082.pem
	I0116 02:50:59.627450    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:50:59.627450    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 02:50:59.667985    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 02:50:59.707367    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 02:50:59.744050    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 02:50:59.785007    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508.pem --> /usr/share/ca-certificates/13508.pem (1338 bytes)
	I0116 02:50:59.824330    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem --> /usr/share/ca-certificates/135082.pem (1708 bytes)
	I0116 02:50:59.862511    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 02:50:59.918095    6604 ssh_runner.go:195] Run: openssl version
	I0116 02:50:59.925103    6604 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 02:50:59.937090    6604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135082.pem && ln -fs /usr/share/ca-certificates/135082.pem /etc/ssl/certs/135082.pem"
	I0116 02:50:59.968135    6604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135082.pem
	I0116 02:50:59.974869    6604 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 01:53 /usr/share/ca-certificates/135082.pem
	I0116 02:50:59.975174    6604 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 01:53 /usr/share/ca-certificates/135082.pem
	I0116 02:50:59.987760    6604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135082.pem
	I0116 02:50:59.996681    6604 command_runner.go:130] > 3ec20f2e
	I0116 02:51:00.011097    6604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/135082.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 02:51:00.042621    6604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 02:51:00.076025    6604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:51:00.082160    6604 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 01:40 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:51:00.082318    6604 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 01:40 /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:51:00.095297    6604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 02:51:00.104202    6604 command_runner.go:130] > b5213941
	I0116 02:51:00.116697    6604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 02:51:00.147306    6604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13508.pem && ln -fs /usr/share/ca-certificates/13508.pem /etc/ssl/certs/13508.pem"
	I0116 02:51:00.174650    6604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13508.pem
	I0116 02:51:00.183351    6604 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 01:53 /usr/share/ca-certificates/13508.pem
	I0116 02:51:00.183407    6604 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 01:53 /usr/share/ca-certificates/13508.pem
	I0116 02:51:00.200898    6604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13508.pem
	I0116 02:51:00.209118    6604 command_runner.go:130] > 51391683
	I0116 02:51:00.223863    6604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13508.pem /etc/ssl/certs/51391683.0"
	I0116 02:51:00.254192    6604 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 02:51:00.259765    6604 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:51:00.260679    6604 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 02:51:00.271816    6604 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0116 02:51:00.310087    6604 command_runner.go:130] > cgroupfs
	I0116 02:51:00.310087    6604 cni.go:84] Creating CNI manager for ""
	I0116 02:51:00.310087    6604 cni.go:136] 2 nodes found, recommending kindnet
	I0116 02:51:00.310087    6604 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 02:51:00.310087    6604 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.122.78 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-853900 NodeName:multinode-853900-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.112.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.122.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 02:51:00.310635    6604 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.122.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-853900-m02"
	  kubeletExtraArgs:
	    node-ip: 172.27.122.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.112.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 02:51:00.310818    6604 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-853900-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.122.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-853900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 02:51:00.325995    6604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 02:51:00.340561    6604 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I0116 02:51:00.340744    6604 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0116 02:51:00.354089    6604 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0116 02:51:00.371259    6604 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet
	I0116 02:51:00.371259    6604 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm
	I0116 02:51:00.371259    6604 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl
	I0116 02:51:01.912469    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0116 02:51:01.925457    6604 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0116 02:51:01.930355    6604 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0116 02:51:01.931202    6604 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0116 02:51:01.931202    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0116 02:51:02.777849    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0116 02:51:02.791700    6604 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0116 02:51:02.798892    6604 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0116 02:51:02.800484    6604 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0116 02:51:02.800659    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0116 02:51:04.343115    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:51:04.364596    6604 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0116 02:51:04.378853    6604 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0116 02:51:04.384938    6604 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0116 02:51:04.385530    6604 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0116 02:51:04.385820    6604 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0116 02:51:04.997297    6604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0116 02:51:05.011302    6604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0116 02:51:05.038603    6604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 02:51:05.081753    6604 ssh_runner.go:195] Run: grep 172.27.112.69	control-plane.minikube.internal$ /etc/hosts
	I0116 02:51:05.087511    6604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.112.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 02:51:05.107010    6604 host.go:66] Checking if "multinode-853900" exists ...
	I0116 02:51:05.107632    6604 config.go:182] Loaded profile config "multinode-853900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 02:51:05.107807    6604 start.go:304] JoinCluster: &{Name:multinode-853900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-853900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.112.69 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.122.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 02:51:05.107807    6604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0116 02:51:05.107807    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 02:51:07.206637    6604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 02:51:07.206637    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:51:07.206769    6604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 02:51:09.769942    6604 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 02:51:09.770255    6604 main.go:141] libmachine: [stderr =====>] : 
	I0116 02:51:09.770328    6604 sshutil.go:53] new ssh client: &{IP:172.27.112.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\id_rsa Username:docker}
	I0116 02:51:09.983345    6604 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 7iq2m1.p19dvgdwus4rdg83 --discovery-token-ca-cert-hash sha256:66ef9a38e06c175fa30850fd5c63399966a4115300a5c161cb370d2d951391b9 
	I0116 02:51:09.984346    6604 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8765072s)
	I0116 02:51:09.984346    6604 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.27.122.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0116 02:51:09.984346    6604 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7iq2m1.p19dvgdwus4rdg83 --discovery-token-ca-cert-hash sha256:66ef9a38e06c175fa30850fd5c63399966a4115300a5c161cb370d2d951391b9 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-853900-m02"
	I0116 02:51:10.051542    6604 command_runner.go:130] ! W0116 02:51:10.054097    1360 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0116 02:51:10.257878    6604 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 02:51:12.035117    6604 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 02:51:12.035117    6604 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0116 02:51:12.035117    6604 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0116 02:51:12.035117    6604 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 02:51:12.035117    6604 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 02:51:12.035117    6604 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 02:51:12.035117    6604 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0116 02:51:12.035117    6604 command_runner.go:130] > This node has joined the cluster:
	I0116 02:51:12.035117    6604 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0116 02:51:12.035117    6604 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0116 02:51:12.035117    6604 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0116 02:51:12.035117    6604 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7iq2m1.p19dvgdwus4rdg83 --discovery-token-ca-cert-hash sha256:66ef9a38e06c175fa30850fd5c63399966a4115300a5c161cb370d2d951391b9 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-853900-m02": (2.0507581s)
	I0116 02:51:12.035117    6604 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0116 02:51:12.235121    6604 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0116 02:51:12.424117    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=multinode-853900 minikube.k8s.io/updated_at=2024_01_16T02_51_12_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 02:51:12.625377    6604 command_runner.go:130] > node/multinode-853900-m02 labeled
	I0116 02:51:12.628059    6604 start.go:306] JoinCluster complete in 7.520284s
	I0116 02:51:12.628166    6604 cni.go:84] Creating CNI manager for ""
	I0116 02:51:12.628246    6604 cni.go:136] 2 nodes found, recommending kindnet
	I0116 02:51:12.645125    6604 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 02:51:12.652657    6604 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 02:51:12.652986    6604 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 02:51:12.652986    6604 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 02:51:12.652986    6604 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 02:51:12.653063    6604 command_runner.go:130] > Access: 2024-01-16 02:46:20.640697600 +0000
	I0116 02:51:12.653063    6604 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 02:51:12.653063    6604 command_runner.go:130] > Change: 2024-01-16 02:46:11.185000000 +0000
	I0116 02:51:12.653063    6604 command_runner.go:130] >  Birth: -
	I0116 02:51:12.653711    6604 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 02:51:12.653711    6604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 02:51:12.709371    6604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 02:51:13.141862    6604 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 02:51:13.142736    6604 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 02:51:13.142736    6604 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 02:51:13.142810    6604 command_runner.go:130] > daemonset.apps/kindnet configured
	I0116 02:51:13.143808    6604 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 02:51:13.144186    6604 kapi.go:59] client config for multinode-853900: &rest.Config{Host:"https://172.27.112.69:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x270c520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:51:13.145734    6604 round_trippers.go:463] GET https://172.27.112.69:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 02:51:13.145770    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:13.145770    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:13.145830    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:13.163153    6604 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0116 02:51:13.163153    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:13.163698    6604 round_trippers.go:580]     Audit-Id: 5dd0e22a-61b0-4242-8ba5-48e1c0d708a8
	I0116 02:51:13.163698    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:13.163698    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:13.163698    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:13.163698    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:13.163698    6604 round_trippers.go:580]     Content-Length: 291
	I0116 02:51:13.163698    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:13 GMT
	I0116 02:51:13.163698    6604 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bb9e4be9-a821-417a-b943-b930d6cec07c","resourceVersion":"414","creationTimestamp":"2024-01-16T02:48:09Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 02:51:13.163941    6604 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-853900" context rescaled to 1 replicas
	I0116 02:51:13.163967    6604 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.27.122.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0116 02:51:13.165051    6604 out.go:177] * Verifying Kubernetes components...
	I0116 02:51:13.181289    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:51:13.204345    6604 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 02:51:13.205223    6604 kapi.go:59] client config for multinode-853900: &rest.Config{Host:"https://172.27.112.69:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x270c520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 02:51:13.205983    6604 node_ready.go:35] waiting up to 6m0s for node "multinode-853900-m02" to be "Ready" ...
	I0116 02:51:13.205983    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:13.205983    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:13.205983    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:13.205983    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:13.209762    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:13.209762    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:13.209762    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:13 GMT
	I0116 02:51:13.209762    6604 round_trippers.go:580]     Audit-Id: e8828bab-e0d4-48a5-8002-5a003458c79f
	I0116 02:51:13.209762    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:13.209762    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:13.209762    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:13.209762    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:13.209762    6604 round_trippers.go:580]     Content-Length: 3913
	I0116 02:51:13.210390    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"571","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:met
adata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec" [truncated 2889 chars]
	I0116 02:51:13.710373    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:13.710373    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:13.710373    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:13.710373    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:13.714455    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:13.714455    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:13.714455    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:13 GMT
	I0116 02:51:13.714455    6604 round_trippers.go:580]     Audit-Id: b571b532-7a3b-40d1-a47b-f88646c7ee89
	I0116 02:51:13.714455    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:13.714455    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:13.714455    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:13.714455    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:13.714455    6604 round_trippers.go:580]     Content-Length: 3913
	I0116 02:51:13.714455    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"571","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:met
adata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec" [truncated 2889 chars]
	I0116 02:51:14.206576    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:14.206576    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:14.206576    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:14.206576    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:14.211125    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:14.211209    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:14.211209    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:14.211209    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:14.211289    6604 round_trippers.go:580]     Content-Length: 3913
	I0116 02:51:14.211289    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:14 GMT
	I0116 02:51:14.211289    6604 round_trippers.go:580]     Audit-Id: 243660b4-b8c4-4797-9939-11fae48ef922
	I0116 02:51:14.211289    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:14.211360    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:14.211501    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"571","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:met
adata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec" [truncated 2889 chars]
	I0116 02:51:14.710178    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:14.710249    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:14.710249    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:14.710249    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:14.714766    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:14.714766    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:14.714766    6604 round_trippers.go:580]     Content-Length: 3913
	I0116 02:51:14.715201    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:14 GMT
	I0116 02:51:14.715201    6604 round_trippers.go:580]     Audit-Id: 753075d1-75ee-432b-b661-b1663ff0942c
	I0116 02:51:14.715201    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:14.715201    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:14.715201    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:14.715201    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:14.715422    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"571","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:met
adata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec" [truncated 2889 chars]
	I0116 02:51:15.217170    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:15.217249    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:15.217249    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:15.217249    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:15.220522    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:15.221187    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:15.221187    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:15.221254    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:15.221254    6604 round_trippers.go:580]     Content-Length: 3913
	I0116 02:51:15.221254    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:15 GMT
	I0116 02:51:15.221254    6604 round_trippers.go:580]     Audit-Id: 4b1e0650-08c1-4d0d-9c9a-3f36225c8f33
	I0116 02:51:15.221254    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:15.221254    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:15.221254    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"571","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:met
adata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec" [truncated 2889 chars]
	I0116 02:51:15.222098    6604 node_ready.go:58] node "multinode-853900-m02" has status "Ready":"False"
	I0116 02:51:15.720228    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:15.720228    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:15.720228    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:15.720228    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:15.724816    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:15.724816    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:15.724816    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:15 GMT
	I0116 02:51:15.725330    6604 round_trippers.go:580]     Audit-Id: 127740f2-b435-41e1-87ea-9f41112d4e45
	I0116 02:51:15.725330    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:15.725330    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:15.725330    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:15.725330    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:15.725330    6604 round_trippers.go:580]     Content-Length: 4022
	I0116 02:51:15.725565    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"576","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 2998 chars]
	I0116 02:51:16.210254    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:16.210314    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:16.210314    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:16.210426    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:16.215001    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:16.215841    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:16.215841    6604 round_trippers.go:580]     Audit-Id: 22605c89-5a33-45e5-999f-d1408abff57b
	I0116 02:51:16.215841    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:16.215841    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:16.215841    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:16.215841    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:16.215841    6604 round_trippers.go:580]     Content-Length: 4022
	I0116 02:51:16.215920    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:16 GMT
	I0116 02:51:16.216117    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"576","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 2998 chars]
	I0116 02:51:16.720447    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:16.720494    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:16.720539    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:16.720539    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:16.724994    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:16.725114    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:16.725114    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:16.725114    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:16.725114    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:16.725201    6604 round_trippers.go:580]     Content-Length: 4022
	I0116 02:51:16.725201    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:16 GMT
	I0116 02:51:16.725201    6604 round_trippers.go:580]     Audit-Id: c40fbe97-0830-481a-a8ef-fbfb19e54a4d
	I0116 02:51:16.725201    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:16.725466    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"576","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 2998 chars]
	I0116 02:51:17.210917    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:17.210917    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:17.210917    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:17.210917    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:17.214561    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:17.214561    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:17.214561    6604 round_trippers.go:580]     Content-Length: 4022
	I0116 02:51:17.214561    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:17 GMT
	I0116 02:51:17.215337    6604 round_trippers.go:580]     Audit-Id: a0feb63a-9307-4788-9231-a50b6213e534
	I0116 02:51:17.215337    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:17.215337    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:17.215337    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:17.215337    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:17.215454    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"576","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 2998 chars]
	I0116 02:51:17.720333    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:17.720396    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:17.720396    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:17.720453    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:17.723729    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:17.724270    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:17.724270    6604 round_trippers.go:580]     Audit-Id: 31dde41c-a592-4e38-ba55-757fd0bc93e3
	I0116 02:51:17.724270    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:17.724270    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:17.724270    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:17.724270    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:17.724270    6604 round_trippers.go:580]     Content-Length: 4022
	I0116 02:51:17.724359    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:17 GMT
	I0116 02:51:17.724545    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"576","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 2998 chars]
	I0116 02:51:17.724894    6604 node_ready.go:58] node "multinode-853900-m02" has status "Ready":"False"
	I0116 02:51:18.211068    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:18.211359    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:18.211405    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:18.211405    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:18.215169    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:18.215169    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:18.215238    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:18.215238    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:18.215238    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:18.215238    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:18.215238    6604 round_trippers.go:580]     Content-Length: 4022
	I0116 02:51:18.215299    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:18 GMT
	I0116 02:51:18.215299    6604 round_trippers.go:580]     Audit-Id: 5f0686d5-3018-4912-926e-430ed8b3d46a
	I0116 02:51:18.215351    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"576","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 2998 chars]
	I0116 02:51:18.719878    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:18.719938    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:18.719938    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:18.719938    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:18.723984    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:18.723984    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:18.723984    6604 round_trippers.go:580]     Content-Length: 4022
	I0116 02:51:18.723984    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:18 GMT
	I0116 02:51:18.723984    6604 round_trippers.go:580]     Audit-Id: 392a6e10-13d1-44a0-9e15-54fbda5d1365
	I0116 02:51:18.723984    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:18.723984    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:18.723984    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:18.723984    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:18.723984    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"576","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 2998 chars]
	I0116 02:51:19.209686    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:19.209686    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:19.209686    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:19.209686    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:19.214024    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:19.214024    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:19.214024    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:19.214024    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:19.214024    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:19.214024    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:19.214024    6604 round_trippers.go:580]     Content-Length: 4022
	I0116 02:51:19.214024    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:19 GMT
	I0116 02:51:19.214024    6604 round_trippers.go:580]     Audit-Id: 3be4c790-8375-49e0-834f-d3db5c3566be
	I0116 02:51:19.214526    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"576","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 2998 chars]
	I0116 02:51:19.714197    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:19.714197    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:19.714197    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:19.714197    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:19.721867    6604 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 02:51:19.722375    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:19.722375    6604 round_trippers.go:580]     Content-Length: 4022
	I0116 02:51:19.722375    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:19 GMT
	I0116 02:51:19.722375    6604 round_trippers.go:580]     Audit-Id: f011f2fb-be1a-4445-bce6-4db3aaa5dfe7
	I0116 02:51:19.722375    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:19.722375    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:19.722375    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:19.722375    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:19.722626    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"576","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 2998 chars]
	I0116 02:51:20.219764    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:20.219853    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:20.219853    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:20.219853    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:20.223552    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:20.223811    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:20.223811    6604 round_trippers.go:580]     Content-Length: 4022
	I0116 02:51:20.223811    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:20 GMT
	I0116 02:51:20.223811    6604 round_trippers.go:580]     Audit-Id: 11b2d017-f3c9-477d-87ad-f533d2503107
	I0116 02:51:20.223811    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:20.223811    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:20.223811    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:20.223811    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:20.223811    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"576","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 2998 chars]
	I0116 02:51:20.224361    6604 node_ready.go:58] node "multinode-853900-m02" has status "Ready":"False"
	I0116 02:51:20.727150    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:20.727150    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:20.727150    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:20.727150    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:20.731771    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:20.731771    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:20.731771    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:20.731771    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:20.731771    6604 round_trippers.go:580]     Content-Length: 4022
	I0116 02:51:20.731771    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:20 GMT
	I0116 02:51:20.731771    6604 round_trippers.go:580]     Audit-Id: e2543dde-ed5d-4594-b0e4-3ecee3c7c676
	I0116 02:51:20.731771    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:20.731771    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:20.731771    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"576","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 2998 chars]
	I0116 02:51:21.221544    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:21.221638    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:21.221638    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:21.221638    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:21.225538    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:21.225538    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:21.225538    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:21.225538    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:21.225538    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:21 GMT
	I0116 02:51:21.225538    6604 round_trippers.go:580]     Audit-Id: f4291d71-ec66-4eb0-ac7e-0f4ccb697df4
	I0116 02:51:21.225538    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:21.225538    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:21.226545    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"590","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3390 chars]
	I0116 02:51:21.714566    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:21.714795    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:21.714795    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:21.714795    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:21.718866    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:21.719116    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:21.719116    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:21.719116    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:21.719191    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:21.719191    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:21.719191    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:21 GMT
	I0116 02:51:21.719229    6604 round_trippers.go:580]     Audit-Id: 81a1a09c-bf6f-44b4-be17-c48f57c9b2d1
	I0116 02:51:21.719229    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"590","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3390 chars]
	I0116 02:51:22.209776    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:22.209776    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:22.209776    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:22.209776    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:22.215004    6604 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 02:51:22.215636    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:22.215636    6604 round_trippers.go:580]     Audit-Id: cfcab3e1-5e30-4f5f-8391-f2cb6ccd7faf
	I0116 02:51:22.215636    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:22.215636    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:22.215636    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:22.215636    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:22.215791    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:22 GMT
	I0116 02:51:22.216031    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"590","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3390 chars]
	I0116 02:51:22.716438    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:22.716662    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:22.716750    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:22.716750    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:22.720064    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:22.720064    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:22.720064    6604 round_trippers.go:580]     Audit-Id: 9663d745-6181-4c99-b6d5-0bccd35aedee
	I0116 02:51:22.720064    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:22.720064    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:22.720064    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:22.720064    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:22.720064    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:22 GMT
	I0116 02:51:22.720064    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"590","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3390 chars]
	I0116 02:51:22.721086    6604 node_ready.go:58] node "multinode-853900-m02" has status "Ready":"False"
	I0116 02:51:23.209618    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:23.209618    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:23.209618    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:23.209618    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:23.219421    6604 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0116 02:51:23.219522    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:23.219522    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:23.219522    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:23.219522    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:23.219522    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:23.219522    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:23 GMT
	I0116 02:51:23.219522    6604 round_trippers.go:580]     Audit-Id: 80b51a34-cea0-46f8-aed5-6877879eb4ff
	I0116 02:51:23.219522    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"590","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3390 chars]
	I0116 02:51:23.718284    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:23.718362    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:23.718417    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:23.718417    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:23.726169    6604 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 02:51:23.726169    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:23.726169    6604 round_trippers.go:580]     Audit-Id: 5b208a2a-ff0a-41ce-8e37-3f1381983b33
	I0116 02:51:23.726169    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:23.726169    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:23.726169    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:23.726169    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:23.726169    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:23 GMT
	I0116 02:51:23.726169    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"590","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3390 chars]
	I0116 02:51:24.221210    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:24.221210    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:24.221210    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:24.221210    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:24.224807    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:24.224807    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:24.225691    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:24.225691    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:24.225691    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:24 GMT
	I0116 02:51:24.225691    6604 round_trippers.go:580]     Audit-Id: fdd96cda-acd1-4530-a601-dc841b8a0d31
	I0116 02:51:24.225691    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:24.225691    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:24.226017    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"590","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3390 chars]
	I0116 02:51:24.709206    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:24.709206    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:24.709206    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:24.709206    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:24.711801    6604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:51:24.712801    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:24.712801    6604 round_trippers.go:580]     Audit-Id: 21fa9fa5-4e50-419b-a37d-41cd5b4af979
	I0116 02:51:24.712801    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:24.712801    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:24.712801    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:24.712801    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:24.712801    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:24 GMT
	I0116 02:51:24.712801    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"590","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3390 chars]
	I0116 02:51:25.212286    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:25.212473    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:25.212473    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:25.212473    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:25.215884    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:25.215884    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:25.216518    6604 round_trippers.go:580]     Audit-Id: 506082c7-e5d0-4083-8426-59cf2dd1cfe6
	I0116 02:51:25.216518    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:25.216518    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:25.216518    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:25.216518    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:25.216518    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:25 GMT
	I0116 02:51:25.216858    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"590","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3390 chars]
	I0116 02:51:25.217354    6604 node_ready.go:58] node "multinode-853900-m02" has status "Ready":"False"
	I0116 02:51:25.715195    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:25.715195    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:25.715195    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:25.715195    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:25.719698    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:25.719698    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:25.719755    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:25.719755    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:25.719755    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:25 GMT
	I0116 02:51:25.719755    6604 round_trippers.go:580]     Audit-Id: 27d4e646-fc80-4d70-9d6e-eec229cfedfe
	I0116 02:51:25.719755    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:25.719755    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:25.719977    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"590","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3390 chars]
	I0116 02:51:26.215977    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:26.216080    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:26.216080    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:26.216080    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:26.228836    6604 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0116 02:51:26.228836    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:26.228836    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:26.228836    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:26.228836    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:26.228836    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:26 GMT
	I0116 02:51:26.228836    6604 round_trippers.go:580]     Audit-Id: 742c49f3-940b-439e-b945-16fc2509f075
	I0116 02:51:26.228836    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:26.228836    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"590","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3390 chars]
	I0116 02:51:26.715680    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:26.715787    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:26.715787    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:26.715787    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:26.720281    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:26.720333    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:26.720333    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:26.720333    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:26.720411    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:26 GMT
	I0116 02:51:26.720411    6604 round_trippers.go:580]     Audit-Id: 9fb64906-b092-432d-b4c6-260f5de638d7
	I0116 02:51:26.720411    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:26.720411    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:26.720565    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"590","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3390 chars]
	I0116 02:51:27.216558    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:27.216658    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:27.216658    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:27.216658    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:27.220551    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:27.220551    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:27.220551    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:27.220551    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:27.220551    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:27.221313    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:27 GMT
	I0116 02:51:27.221313    6604 round_trippers.go:580]     Audit-Id: c8f106c2-fe6a-427c-9c26-1fbb97894136
	I0116 02:51:27.221313    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:27.221364    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"590","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3390 chars]
	I0116 02:51:27.221921    6604 node_ready.go:58] node "multinode-853900-m02" has status "Ready":"False"
	I0116 02:51:27.717420    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:27.717518    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:27.717518    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:27.717518    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:27.722033    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:27.722033    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:27.722033    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:27.722033    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:27 GMT
	I0116 02:51:27.722033    6604 round_trippers.go:580]     Audit-Id: b018471e-bc12-434d-8d66-29232407610d
	I0116 02:51:27.722033    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:27.722033    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:27.722033    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:27.722033    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"590","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3390 chars]
	I0116 02:51:28.216254    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:28.216254    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:28.216254    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:28.216254    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:28.220907    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:28.221787    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:28.221920    6604 round_trippers.go:580]     Audit-Id: 7316bb97-bbf4-46bb-9742-ae2d954e0f17
	I0116 02:51:28.221920    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:28.221920    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:28.221920    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:28.221920    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:28.221920    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:28 GMT
	I0116 02:51:28.221920    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"590","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3390 chars]
	I0116 02:51:28.717230    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:28.717310    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:28.717310    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:28.717310    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:28.724416    6604 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 02:51:28.724416    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:28.724416    6604 round_trippers.go:580]     Audit-Id: 39b69121-c2ad-4daf-abc8-e1f3ad9ad247
	I0116 02:51:28.724416    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:28.724416    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:28.724416    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:28.724995    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:28.724995    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:28 GMT
	I0116 02:51:28.725288    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"590","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3390 chars]
	I0116 02:51:29.218584    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:29.218692    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:29.218692    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:29.218780    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:29.225211    6604 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 02:51:29.225211    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:29.225211    6604 round_trippers.go:580]     Audit-Id: f3418dce-3ed1-41fd-9150-fecc8d82288b
	I0116 02:51:29.225211    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:29.225211    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:29.225211    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:29.225211    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:29.225211    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:29 GMT
	I0116 02:51:29.225211    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"590","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3390 chars]
	I0116 02:51:29.226020    6604 node_ready.go:58] node "multinode-853900-m02" has status "Ready":"False"
	I0116 02:51:29.719079    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:29.719174    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:29.719174    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:29.719174    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:29.726218    6604 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 02:51:29.726298    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:29.726298    6604 round_trippers.go:580]     Audit-Id: 8e84d3d9-4103-4ac3-a290-316e2f4db3da
	I0116 02:51:29.726298    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:29.726351    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:29.726351    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:29.726351    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:29.726351    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:29 GMT
	I0116 02:51:29.726461    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"604","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3256 chars]
	I0116 02:51:29.727045    6604 node_ready.go:49] node "multinode-853900-m02" has status "Ready":"True"
	I0116 02:51:29.727045    6604 node_ready.go:38] duration metric: took 16.5209543s waiting for node "multinode-853900-m02" to be "Ready" ...
	I0116 02:51:29.727045    6604 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:51:29.727343    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods
	I0116 02:51:29.727384    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:29.727427    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:29.727427    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:29.732142    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:29.732142    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:29.732142    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:29.732142    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:29.732142    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:29.732142    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:29.732142    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:29 GMT
	I0116 02:51:29.732142    6604 round_trippers.go:580]     Audit-Id: 84deebf5-784b-4c2b-8886-56c1808817fd
	I0116 02:51:29.733954    6604 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"604"},"items":[{"metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"410","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67474 chars]
	I0116 02:51:29.737451    6604 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace to be "Ready" ...
	I0116 02:51:29.737594    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-62jpz
	I0116 02:51:29.737622    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:29.737622    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:29.737667    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:29.740877    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:29.740877    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:29.740877    6604 round_trippers.go:580]     Audit-Id: 9916008b-d676-4084-8447-47524061cd4d
	I0116 02:51:29.740877    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:29.740877    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:29.740877    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:29.740877    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:29.740877    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:29 GMT
	I0116 02:51:29.740877    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"410","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I0116 02:51:29.740877    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:51:29.740877    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:29.740877    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:29.740877    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:29.743940    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:29.743940    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:29.743940    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:29.743940    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:29.743940    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:29 GMT
	I0116 02:51:29.743940    6604 round_trippers.go:580]     Audit-Id: 0fffa920-442e-4066-be35-6800f748d2fb
	I0116 02:51:29.743940    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:29.743940    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:29.743940    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"420","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0116 02:51:29.745361    6604 pod_ready.go:92] pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace has status "Ready":"True"
	I0116 02:51:29.745361    6604 pod_ready.go:81] duration metric: took 7.8824ms waiting for pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace to be "Ready" ...
	I0116 02:51:29.745407    6604 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 02:51:29.745503    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-853900
	I0116 02:51:29.745575    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:29.745575    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:29.745575    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:29.747849    6604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:51:29.747849    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:29.747849    6604 round_trippers.go:580]     Audit-Id: 2360cc4c-4060-4a8a-af9b-f367e2f74cd9
	I0116 02:51:29.747849    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:29.747849    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:29.747849    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:29.747849    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:29.747849    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:29 GMT
	I0116 02:51:29.748550    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-853900","namespace":"kube-system","uid":"384c4f82-a0f3-4576-b859-80837d0f109b","resourceVersion":"374","creationTimestamp":"2024-01-16T02:48:09Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.112.69:2379","kubernetes.io/config.hash":"c7b11a574a1c958cf64320e53e2315c6","kubernetes.io/config.mirror":"c7b11a574a1c958cf64320e53e2315c6","kubernetes.io/config.seen":"2024-01-16T02:48:09.211488777Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I0116 02:51:29.748957    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:51:29.748957    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:29.748957    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:29.748957    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:29.752658    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:29.752658    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:29.752658    6604 round_trippers.go:580]     Audit-Id: 1d371540-2d90-46db-8d8c-a8c159f7e381
	I0116 02:51:29.752658    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:29.752658    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:29.752658    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:29.752658    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:29.752658    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:29 GMT
	I0116 02:51:29.753489    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"420","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0116 02:51:29.753897    6604 pod_ready.go:92] pod "etcd-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 02:51:29.753981    6604 pod_ready.go:81] duration metric: took 8.5742ms waiting for pod "etcd-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 02:51:29.753981    6604 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 02:51:29.754068    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-853900
	I0116 02:51:29.754140    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:29.754140    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:29.754140    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:29.759407    6604 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 02:51:29.759407    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:29.759407    6604 round_trippers.go:580]     Audit-Id: 120547bb-195e-44a2-80c7-701e9568c94e
	I0116 02:51:29.759407    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:29.759407    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:29.759407    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:29.759407    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:29.759407    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:29 GMT
	I0116 02:51:29.759407    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-853900","namespace":"kube-system","uid":"a437ff8c-f27b-433b-97ac-dae3d276bc92","resourceVersion":"376","creationTimestamp":"2024-01-16T02:48:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.112.69:8443","kubernetes.io/config.hash":"41ea37f04f983128860ae937c9f060bb","kubernetes.io/config.mirror":"41ea37f04f983128860ae937c9f060bb","kubernetes.io/config.seen":"2024-01-16T02:48:00.146128309Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I0116 02:51:29.760311    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:51:29.760311    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:29.760311    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:29.760311    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:29.763336    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:29.763336    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:29.763438    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:29.763438    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:29.763472    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:29.763472    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:29.763472    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:29 GMT
	I0116 02:51:29.763472    6604 round_trippers.go:580]     Audit-Id: 5fdf38ec-6a07-41e0-8520-9441fe535f47
	I0116 02:51:29.763565    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"420","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0116 02:51:29.764299    6604 pod_ready.go:92] pod "kube-apiserver-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 02:51:29.764299    6604 pod_ready.go:81] duration metric: took 10.317ms waiting for pod "kube-apiserver-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 02:51:29.764299    6604 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 02:51:29.764299    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-853900
	I0116 02:51:29.764299    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:29.764299    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:29.764299    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:29.766871    6604 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 02:51:29.766871    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:29.766871    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:29.767774    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:29.767774    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:29.767774    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:29 GMT
	I0116 02:51:29.767774    6604 round_trippers.go:580]     Audit-Id: 167fab98-b133-4f0f-9d49-a13ca82c14d1
	I0116 02:51:29.767774    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:29.768113    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-853900","namespace":"kube-system","uid":"5a4d4e86-9836-401a-8d98-1519ff75a0ec","resourceVersion":"378","creationTimestamp":"2024-01-16T02:48:08Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f09e1ab837c9ef5b247e4d57afe8993b","kubernetes.io/config.mirror":"f09e1ab837c9ef5b247e4d57afe8993b","kubernetes.io/config.seen":"2024-01-16T02:48:00.146129509Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I0116 02:51:29.768113    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:51:29.768676    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:29.768676    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:29.768676    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:29.771882    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:29.772577    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:29.772577    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:29 GMT
	I0116 02:51:29.772577    6604 round_trippers.go:580]     Audit-Id: 979b37e4-5811-488f-ab73-a58a9541d56e
	I0116 02:51:29.772577    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:29.772577    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:29.772577    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:29.772577    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:29.772577    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"420","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0116 02:51:29.773128    6604 pod_ready.go:92] pod "kube-controller-manager-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 02:51:29.773128    6604 pod_ready.go:81] duration metric: took 8.8298ms waiting for pod "kube-controller-manager-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 02:51:29.773327    6604 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h977r" in "kube-system" namespace to be "Ready" ...
	I0116 02:51:29.920542    6604 request.go:629] Waited for 147.0189ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h977r
	I0116 02:51:29.920745    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h977r
	I0116 02:51:29.920745    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:29.920858    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:29.920858    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:29.924274    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:29.925129    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:29.925129    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:29.925129    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:29.925213    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:29.925267    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:29.925267    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:29 GMT
	I0116 02:51:29.925267    6604 round_trippers.go:580]     Audit-Id: 51efc516-baba-41bf-8f2a-52253c950a69
	I0116 02:51:29.925541    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-h977r","generateName":"kube-proxy-","namespace":"kube-system","uid":"5434ef27-d483-46c1-a95d-bd86163ee965","resourceVersion":"587","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0116 02:51:30.125921    6604 request.go:629] Waited for 199.8577ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:30.126125    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900-m02
	I0116 02:51:30.126125    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:30.126125    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:30.126125    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:30.129856    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:30.129856    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:30.129856    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:30.129856    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:30.130805    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:30.130805    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:30.130805    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:30 GMT
	I0116 02:51:30.130805    6604 round_trippers.go:580]     Audit-Id: 424b0339-10ce-47f4-8356-36c303f28da3
	I0116 02:51:30.131178    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"604","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T02_51_12_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annot
ations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-deta [truncated 3256 chars]
	I0116 02:51:30.131597    6604 pod_ready.go:92] pod "kube-proxy-h977r" in "kube-system" namespace has status "Ready":"True"
	I0116 02:51:30.131689    6604 pod_ready.go:81] duration metric: took 358.3603ms waiting for pod "kube-proxy-h977r" in "kube-system" namespace to be "Ready" ...
	I0116 02:51:30.131689    6604 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tpc2g" in "kube-system" namespace to be "Ready" ...
	I0116 02:51:30.327867    6604 request.go:629] Waited for 195.8049ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tpc2g
	I0116 02:51:30.328159    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tpc2g
	I0116 02:51:30.328159    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:30.328159    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:30.328159    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:30.332873    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:30.332967    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:30.332967    6604 round_trippers.go:580]     Audit-Id: a60b39e0-4742-4bb9-8c24-0accf19c582b
	I0116 02:51:30.332967    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:30.332967    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:30.332967    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:30.332967    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:30.332967    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:30 GMT
	I0116 02:51:30.333503    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tpc2g","generateName":"kube-proxy-","namespace":"kube-system","uid":"0cb279ef-9d3a-4c55-9c57-ce7eede8a052","resourceVersion":"368","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I0116 02:51:30.529285    6604 request.go:629] Waited for 194.1252ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:51:30.529285    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:51:30.529285    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:30.529285    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:30.529285    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:30.532938    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:30.532938    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:30.532938    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:30.532938    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:30.532938    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:30.532938    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:30 GMT
	I0116 02:51:30.533877    6604 round_trippers.go:580]     Audit-Id: c5ecd874-69e8-4dc4-9959-68c54a3e6e44
	I0116 02:51:30.533877    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:30.534221    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"420","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0116 02:51:30.534311    6604 pod_ready.go:92] pod "kube-proxy-tpc2g" in "kube-system" namespace has status "Ready":"True"
	I0116 02:51:30.534311    6604 pod_ready.go:81] duration metric: took 402.6191ms waiting for pod "kube-proxy-tpc2g" in "kube-system" namespace to be "Ready" ...
	I0116 02:51:30.534311    6604 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 02:51:30.734116    6604 request.go:629] Waited for 199.6113ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-853900
	I0116 02:51:30.734116    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-853900
	I0116 02:51:30.734403    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:30.734440    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:30.734440    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:30.737788    6604 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 02:51:30.737788    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:30.737788    6604 round_trippers.go:580]     Audit-Id: b82fcb9b-b748-4f71-b862-9d2fec4d3f3c
	I0116 02:51:30.737788    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:30.738565    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:30.738565    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:30.738565    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:30.738565    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:30 GMT
	I0116 02:51:30.738997    6604 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-853900","namespace":"kube-system","uid":"d75db7e3-c171-428f-9c08-f268ce16da31","resourceVersion":"354","creationTimestamp":"2024-01-16T02:48:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aff36fe37a6d6fc8d309826a0f54f93d","kubernetes.io/config.mirror":"aff36fe37a6d6fc8d309826a0f54f93d","kubernetes.io/config.seen":"2024-01-16T02:48:09.211494477Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I0116 02:51:30.920022    6604 request.go:629] Waited for 180.36ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:51:30.920224    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes/multinode-853900
	I0116 02:51:30.920336    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:30.920336    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:30.920336    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:30.924985    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:30.924985    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:30.924985    6604 round_trippers.go:580]     Audit-Id: 75847699-e318-4b8c-82f6-ada695bcff4b
	I0116 02:51:30.924985    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:30.925197    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:30.925197    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:30.925197    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:30.925197    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:30 GMT
	I0116 02:51:30.925390    6604 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"420","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0116 02:51:30.925653    6604 pod_ready.go:92] pod "kube-scheduler-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 02:51:30.925653    6604 pod_ready.go:81] duration metric: took 391.3395ms waiting for pod "kube-scheduler-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 02:51:30.925653    6604 pod_ready.go:38] duration metric: took 1.1986006s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 02:51:30.925653    6604 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 02:51:30.939464    6604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 02:51:30.960295    6604 system_svc.go:56] duration metric: took 34.6418ms WaitForService to wait for kubelet.
	I0116 02:51:30.960377    6604 kubeadm.go:581] duration metric: took 17.7961824s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 02:51:30.960459    6604 node_conditions.go:102] verifying NodePressure condition ...
	I0116 02:51:31.123885    6604 request.go:629] Waited for 163.0578ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.112.69:8443/api/v1/nodes
	I0116 02:51:31.123885    6604 round_trippers.go:463] GET https://172.27.112.69:8443/api/v1/nodes
	I0116 02:51:31.124012    6604 round_trippers.go:469] Request Headers:
	I0116 02:51:31.124012    6604 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 02:51:31.124057    6604 round_trippers.go:473]     Accept: application/json, */*
	I0116 02:51:31.128543    6604 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 02:51:31.128796    6604 round_trippers.go:577] Response Headers:
	I0116 02:51:31.128796    6604 round_trippers.go:580]     Audit-Id: 70ce9df8-2e99-4e21-959d-c206673bd830
	I0116 02:51:31.128796    6604 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 02:51:31.128796    6604 round_trippers.go:580]     Content-Type: application/json
	I0116 02:51:31.128796    6604 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 02:51:31.128796    6604 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 02:51:31.128796    6604 round_trippers.go:580]     Date: Tue, 16 Jan 2024 02:51:31 GMT
	I0116 02:51:31.129400    6604 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"607"},"items":[{"metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"420","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9139 chars]
	I0116 02:51:31.130165    6604 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:51:31.130257    6604 node_conditions.go:123] node cpu capacity is 2
	I0116 02:51:31.130257    6604 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 02:51:31.130326    6604 node_conditions.go:123] node cpu capacity is 2
	I0116 02:51:31.130326    6604 node_conditions.go:105] duration metric: took 169.8441ms to run NodePressure ...
	I0116 02:51:31.130326    6604 start.go:228] waiting for startup goroutines ...
	I0116 02:51:31.130404    6604 start.go:242] writing updated cluster config ...
	I0116 02:51:31.143589    6604 ssh_runner.go:195] Run: rm -f paused
	I0116 02:51:31.312346    6604 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 02:51:31.312937    6604 out.go:177] * Done! kubectl is now configured to use "multinode-853900" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-01-16 02:46:13 UTC, ends at Tue 2024-01-16 02:52:47 UTC. --
	Jan 16 02:48:33 multinode-853900 dockerd[1314]: time="2024-01-16T02:48:33.498509620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 02:48:33 multinode-853900 dockerd[1314]: time="2024-01-16T02:48:33.507628083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 16 02:48:33 multinode-853900 dockerd[1314]: time="2024-01-16T02:48:33.507725083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 02:48:33 multinode-853900 dockerd[1314]: time="2024-01-16T02:48:33.507755983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 16 02:48:33 multinode-853900 dockerd[1314]: time="2024-01-16T02:48:33.507770984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 02:48:34 multinode-853900 cri-dockerd[1199]: time="2024-01-16T02:48:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/71976a8048bc5580d44c387e684e1e9267882d3a977d698a79f1b87e942bd9a5/resolv.conf as [nameserver 172.27.112.1]"
	Jan 16 02:48:34 multinode-853900 cri-dockerd[1199]: time="2024-01-16T02:48:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/df918467deeb18f25309ecaed44a3cefe80297f3cc46ab47da405361913298f6/resolv.conf as [nameserver 172.27.112.1]"
	Jan 16 02:48:34 multinode-853900 dockerd[1314]: time="2024-01-16T02:48:34.305945155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 16 02:48:34 multinode-853900 dockerd[1314]: time="2024-01-16T02:48:34.306092256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 02:48:34 multinode-853900 dockerd[1314]: time="2024-01-16T02:48:34.306114856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 16 02:48:34 multinode-853900 dockerd[1314]: time="2024-01-16T02:48:34.306126556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 02:48:34 multinode-853900 dockerd[1314]: time="2024-01-16T02:48:34.408424017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 16 02:48:34 multinode-853900 dockerd[1314]: time="2024-01-16T02:48:34.408714319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 02:48:34 multinode-853900 dockerd[1314]: time="2024-01-16T02:48:34.408817020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 16 02:48:34 multinode-853900 dockerd[1314]: time="2024-01-16T02:48:34.408833820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 02:51:56 multinode-853900 dockerd[1314]: time="2024-01-16T02:51:56.313161873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 16 02:51:56 multinode-853900 dockerd[1314]: time="2024-01-16T02:51:56.313268974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 02:51:56 multinode-853900 dockerd[1314]: time="2024-01-16T02:51:56.314488889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 16 02:51:56 multinode-853900 dockerd[1314]: time="2024-01-16T02:51:56.314595390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 02:51:56 multinode-853900 cri-dockerd[1199]: time="2024-01-16T02:51:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e8afc8bf1589d638faf5045b459ed448a95a1c4814af0c2a0461b09de05a022a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jan 16 02:51:57 multinode-853900 cri-dockerd[1199]: time="2024-01-16T02:51:57Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jan 16 02:51:57 multinode-853900 dockerd[1314]: time="2024-01-16T02:51:57.966000898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 16 02:51:57 multinode-853900 dockerd[1314]: time="2024-01-16T02:51:57.966090699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 02:51:57 multinode-853900 dockerd[1314]: time="2024-01-16T02:51:57.966110100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 16 02:51:57 multinode-853900 dockerd[1314]: time="2024-01-16T02:51:57.966979310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c4b7f3b3d92db       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   50 seconds ago      Running             busybox                   0                   e8afc8bf1589d       busybox-5bc68d56bd-fp6wc
	7c4c2a1e9df5b       ead0a4a53df89                                                                                         4 minutes ago       Running             coredns                   0                   df918467deeb1       coredns-5dd5756b68-62jpz
	c7157c42967e6       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   71976a8048bc5       storage-provisioner
	d0b6d500287e8       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              4 minutes ago       Running             kindnet-cni               0                   b1892deb5d5dd       kindnet-x5nvv
	e4eefc8ffba88       83f6cc407eed8                                                                                         4 minutes ago       Running             kube-proxy                0                   a5cb81c4b523e       kube-proxy-tpc2g
	7f47011532879       e3db313c6dbc0                                                                                         4 minutes ago       Running             kube-scheduler            0                   bcdc39931f56e       kube-scheduler-multinode-853900
	dcdfa712e694d       73deb9a3f7025                                                                                         4 minutes ago       Running             etcd                      0                   c04c5d9608839       etcd-multinode-853900
	f8ce77440648f       d058aa5ab969c                                                                                         4 minutes ago       Running             kube-controller-manager   0                   7477cf9652147       kube-controller-manager-multinode-853900
	e829a48e9f669       7fe0e6f37db33                                                                                         4 minutes ago       Running             kube-apiserver            0                   9f0b4732ffddb       kube-apiserver-multinode-853900
	
	
	==> coredns [7c4c2a1e9df5] <==
	[INFO] 10.244.1.2:57927 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000233802s
	[INFO] 10.244.0.3:51174 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261403s
	[INFO] 10.244.0.3:56016 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000077201s
	[INFO] 10.244.0.3:46500 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088401s
	[INFO] 10.244.0.3:40048 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119302s
	[INFO] 10.244.0.3:60012 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000130501s
	[INFO] 10.244.0.3:53198 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064001s
	[INFO] 10.244.0.3:34162 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070701s
	[INFO] 10.244.0.3:33411 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000589s
	[INFO] 10.244.1.2:45595 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217602s
	[INFO] 10.244.1.2:56102 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165902s
	[INFO] 10.244.1.2:39624 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000617s
	[INFO] 10.244.1.2:42716 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054901s
	[INFO] 10.244.0.3:42485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217902s
	[INFO] 10.244.0.3:59644 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000230202s
	[INFO] 10.244.0.3:59058 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118801s
	[INFO] 10.244.0.3:60247 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106301s
	[INFO] 10.244.1.2:51965 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000314404s
	[INFO] 10.244.1.2:38409 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155301s
	[INFO] 10.244.1.2:41179 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116001s
	[INFO] 10.244.1.2:35298 - 5 "PTR IN 1.112.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135802s
	[INFO] 10.244.0.3:37147 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167101s
	[INFO] 10.244.0.3:50056 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000463104s
	[INFO] 10.244.0.3:51075 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149101s
	[INFO] 10.244.0.3:43165 - 5 "PTR IN 1.112.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088701s
	
	
	==> describe nodes <==
	Name:               multinode-853900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-853900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=multinode-853900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T02_48_10_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:48:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-853900
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:52:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:52:14 +0000   Tue, 16 Jan 2024 02:48:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:52:14 +0000   Tue, 16 Jan 2024 02:48:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:52:14 +0000   Tue, 16 Jan 2024 02:48:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:52:14 +0000   Tue, 16 Jan 2024 02:48:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.112.69
	  Hostname:    multinode-853900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4e5e342d4e24168a1a86911d0f33f2b
	  System UUID:                10054ccc-7b49-694b-9027-8f9af2c15e6e
	  Boot ID:                    e8ba7f6d-4a12-4379-9cf1-98031d8da46c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-fp6wc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 coredns-5dd5756b68-62jpz                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m26s
	  kube-system                 etcd-multinode-853900                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m38s
	  kube-system                 kindnet-x5nvv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m26s
	  kube-system                 kube-apiserver-multinode-853900             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-multinode-853900    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-proxy-tpc2g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-scheduler-multinode-853900             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m24s  kube-proxy       
	  Normal  Starting                 4m38s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m38s  kubelet          Node multinode-853900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m38s  kubelet          Node multinode-853900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m38s  kubelet          Node multinode-853900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m38s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m27s  node-controller  Node multinode-853900 event: Registered Node multinode-853900 in Controller
	  Normal  NodeReady                4m15s  kubelet          Node multinode-853900 status is now: NodeReady
	
	
	Name:               multinode-853900-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-853900-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=multinode-853900
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_16T02_51_12_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:51:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-853900-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 02:52:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 02:52:12 +0000   Tue, 16 Jan 2024 02:51:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 02:52:12 +0000   Tue, 16 Jan 2024 02:51:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 02:52:12 +0000   Tue, 16 Jan 2024 02:51:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 02:52:12 +0000   Tue, 16 Jan 2024 02:51:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.122.78
	  Hostname:    multinode-853900-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 dbcf0764d4e94a19a6341c364d4fd3eb
	  System UUID:                3b004291-ff12-3445-8f31-f8a19c168043
	  Boot ID:                    9c76e379-6d44-4ced-a536-4a26790eccc6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-9t8fh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kindnet-6s9wr               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      96s
	  kube-system                 kube-proxy-h977r            0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 87s                kube-proxy       
	  Normal  Starting                 97s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  96s (x2 over 96s)  kubelet          Node multinode-853900-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s (x2 over 96s)  kubelet          Node multinode-853900-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s (x2 over 96s)  kubelet          Node multinode-853900-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  96s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           92s                node-controller  Node multinode-853900-m02 event: Registered Node multinode-853900-m02 in Controller
	  Normal  NodeReady                78s                kubelet          Node multinode-853900-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000411] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.300298] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.067955] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.170564] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000004] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.793956] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan16 02:47] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.145407] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[ +30.009800] systemd-fstab-generator[931]: Ignoring "noauto" for root device
	[  +0.576104] systemd-fstab-generator[969]: Ignoring "noauto" for root device
	[  +0.165083] systemd-fstab-generator[980]: Ignoring "noauto" for root device
	[  +0.181929] systemd-fstab-generator[993]: Ignoring "noauto" for root device
	[  +1.348460] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.405351] systemd-fstab-generator[1154]: Ignoring "noauto" for root device
	[  +0.169647] systemd-fstab-generator[1165]: Ignoring "noauto" for root device
	[  +0.158407] systemd-fstab-generator[1176]: Ignoring "noauto" for root device
	[  +0.236608] systemd-fstab-generator[1191]: Ignoring "noauto" for root device
	[ +12.822351] systemd-fstab-generator[1299]: Ignoring "noauto" for root device
	[  +2.604539] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.350888] systemd-fstab-generator[1677]: Ignoring "noauto" for root device
	[Jan16 02:48] kauditd_printk_skb: 29 callbacks suppressed
	[  +8.624038] systemd-fstab-generator[2618]: Ignoring "noauto" for root device
	[ +23.890711] kauditd_printk_skb: 16 callbacks suppressed
	
	
	==> etcd [dcdfa712e694] <==
	{"level":"info","ts":"2024-01-16T02:48:03.427021Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.27.112.69:2379"}
	{"level":"info","ts":"2024-01-16T02:48:03.427327Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:48:03.428633Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-16T02:48:03.431155Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-16T02:48:03.431346Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-16T02:48:03.432222Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d66a7aae143f5129","local-member-id":"5f10da0a0b7b328e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:48:03.45398Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:48:03.454056Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-16T02:48:31.295094Z","caller":"traceutil/trace.go:171","msg":"trace[1470016887] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"162.039371ms","start":"2024-01-16T02:48:31.133027Z","end":"2024-01-16T02:48:31.295066Z","steps":["trace[1470016887] 'process raft request'  (duration: 161.90547ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:49:07.749384Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"313.633312ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.27.112.69\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2024-01-16T02:49:07.750223Z","caller":"traceutil/trace.go:171","msg":"trace[535116502] range","detail":"{range_begin:/registry/masterleases/172.27.112.69; range_end:; response_count:1; response_revision:441; }","duration":"314.471608ms","start":"2024-01-16T02:49:07.435736Z","end":"2024-01-16T02:49:07.750208Z","steps":["trace[535116502] 'range keys from in-memory index tree'  (duration: 313.474113ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:49:07.750317Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T02:49:07.435719Z","time spent":"314.586808ms","remote":"127.0.0.1:55022","response type":"/etcdserverpb.KV/Range","request count":0,"request size":38,"response count":1,"response size":156,"request content":"key:\"/registry/masterleases/172.27.112.69\" "}
	{"level":"warn","ts":"2024-01-16T02:49:07.749384Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.496658ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T02:49:07.750538Z","caller":"traceutil/trace.go:171","msg":"trace[1837437887] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; response_count:0; response_revision:441; }","duration":"123.678253ms","start":"2024-01-16T02:49:07.626848Z","end":"2024-01-16T02:49:07.750526Z","steps":["trace[1837437887] 'count revisions from in-memory index tree'  (duration: 122.09286ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-16T02:49:10.989647Z","caller":"traceutil/trace.go:171","msg":"trace[359046505] linearizableReadLoop","detail":"{readStateIndex:467; appliedIndex:466; }","duration":"464.294908ms","start":"2024-01-16T02:49:10.525333Z","end":"2024-01-16T02:49:10.989628Z","steps":["trace[359046505] 'read index received'  (duration: 464.008109ms)","trace[359046505] 'applied index is now lower than readState.Index'  (duration: 286.199µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T02:49:10.989827Z","caller":"traceutil/trace.go:171","msg":"trace[1406526943] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"513.664607ms","start":"2024-01-16T02:49:10.476154Z","end":"2024-01-16T02:49:10.989819Z","steps":["trace[1406526943] 'process raft request'  (duration: 513.204209ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:49:10.989964Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.097623ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-01-16T02:49:10.990001Z","caller":"traceutil/trace.go:171","msg":"trace[1584454130] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:444; }","duration":"117.145423ms","start":"2024-01-16T02:49:10.872844Z","end":"2024-01-16T02:49:10.98999Z","steps":["trace[1584454130] 'agreement among raft nodes before linearized reading'  (duration: 117.054123ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:49:10.990231Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"464.935406ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T02:49:10.990253Z","caller":"traceutil/trace.go:171","msg":"trace[783784399] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:444; }","duration":"464.961006ms","start":"2024-01-16T02:49:10.525287Z","end":"2024-01-16T02:49:10.990248Z","steps":["trace[783784399] 'agreement among raft nodes before linearized reading'  (duration: 464.902106ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T02:49:10.990269Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T02:49:10.52527Z","time spent":"464.994205ms","remote":"127.0.0.1:55014","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-01-16T02:49:10.990321Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T02:49:10.476135Z","time spent":"513.732907ms","remote":"127.0.0.1:55080","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":553,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-853900\" mod_revision:436 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-853900\" value_size:496 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-853900\" > >"}
	{"level":"info","ts":"2024-01-16T02:49:13.214888Z","caller":"traceutil/trace.go:171","msg":"trace[1391090737] linearizableReadLoop","detail":"{readStateIndex:469; appliedIndex:468; }","duration":"209.295214ms","start":"2024-01-16T02:49:13.005574Z","end":"2024-01-16T02:49:13.21487Z","steps":["trace[1391090737] 'read index received'  (duration: 208.934915ms)","trace[1391090737] 'applied index is now lower than readState.Index'  (duration: 358.599µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-16T02:49:13.215081Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.638312ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-01-16T02:49:13.215428Z","caller":"traceutil/trace.go:171","msg":"trace[1370022832] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:445; }","duration":"209.933011ms","start":"2024-01-16T02:49:13.005415Z","end":"2024-01-16T02:49:13.215348Z","steps":["trace[1370022832] 'agreement among raft nodes before linearized reading'  (duration: 209.595212ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:52:47 up 6 min,  0 users,  load average: 0.63, 0.52, 0.26
	Linux multinode-853900 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [d0b6d500287e] <==
	I0116 02:51:42.994338       1 main.go:250] Node multinode-853900-m02 has CIDR [10.244.1.0/24] 
	I0116 02:51:53.000709       1 main.go:223] Handling node with IPs: map[172.27.112.69:{}]
	I0116 02:51:53.000853       1 main.go:227] handling current node
	I0116 02:51:53.000869       1 main.go:223] Handling node with IPs: map[172.27.122.78:{}]
	I0116 02:51:53.000931       1 main.go:250] Node multinode-853900-m02 has CIDR [10.244.1.0/24] 
	I0116 02:52:03.011700       1 main.go:223] Handling node with IPs: map[172.27.112.69:{}]
	I0116 02:52:03.011892       1 main.go:227] handling current node
	I0116 02:52:03.011923       1 main.go:223] Handling node with IPs: map[172.27.122.78:{}]
	I0116 02:52:03.012030       1 main.go:250] Node multinode-853900-m02 has CIDR [10.244.1.0/24] 
	I0116 02:52:13.022091       1 main.go:223] Handling node with IPs: map[172.27.112.69:{}]
	I0116 02:52:13.022135       1 main.go:227] handling current node
	I0116 02:52:13.022218       1 main.go:223] Handling node with IPs: map[172.27.122.78:{}]
	I0116 02:52:13.022249       1 main.go:250] Node multinode-853900-m02 has CIDR [10.244.1.0/24] 
	I0116 02:52:23.038019       1 main.go:223] Handling node with IPs: map[172.27.112.69:{}]
	I0116 02:52:23.038050       1 main.go:227] handling current node
	I0116 02:52:23.038067       1 main.go:223] Handling node with IPs: map[172.27.122.78:{}]
	I0116 02:52:23.038074       1 main.go:250] Node multinode-853900-m02 has CIDR [10.244.1.0/24] 
	I0116 02:52:33.050661       1 main.go:223] Handling node with IPs: map[172.27.112.69:{}]
	I0116 02:52:33.051080       1 main.go:227] handling current node
	I0116 02:52:33.051121       1 main.go:223] Handling node with IPs: map[172.27.122.78:{}]
	I0116 02:52:33.051164       1 main.go:250] Node multinode-853900-m02 has CIDR [10.244.1.0/24] 
	I0116 02:52:43.061733       1 main.go:223] Handling node with IPs: map[172.27.112.69:{}]
	I0116 02:52:43.061900       1 main.go:227] handling current node
	I0116 02:52:43.061918       1 main.go:223] Handling node with IPs: map[172.27.122.78:{}]
	I0116 02:52:43.061927       1 main.go:250] Node multinode-853900-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [e829a48e9f66] <==
	I0116 02:48:05.632402       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0116 02:48:05.632708       1 aggregator.go:166] initial CRD sync complete...
	I0116 02:48:05.632927       1 autoregister_controller.go:141] Starting autoregister controller
	I0116 02:48:05.633456       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0116 02:48:05.633626       1 cache.go:39] Caches are synced for autoregister controller
	I0116 02:48:05.665194       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0116 02:48:06.418608       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0116 02:48:06.425112       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0116 02:48:06.425306       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0116 02:48:07.267887       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 02:48:07.329150       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0116 02:48:07.439958       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0116 02:48:07.449966       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [172.27.112.69]
	I0116 02:48:07.451513       1 controller.go:624] quota admission added evaluator for: endpoints
	I0116 02:48:07.461675       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0116 02:48:07.550004       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0116 02:48:09.040060       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0116 02:48:09.052606       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0116 02:48:09.072722       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0116 02:48:21.062615       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0116 02:48:21.313694       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0116 02:49:10.991111       1 trace.go:236] Trace[1234057078]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:08cf0022-f31f-43b5-9a10-2f63eb349ae5,client:172.27.112.69,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-853900,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (16-Jan-2024 02:49:10.474) (total time: 516ms):
	Trace[1234057078]: ["GuaranteedUpdate etcd3" audit-id:08cf0022-f31f-43b5-9a10-2f63eb349ae5,key:/leases/kube-node-lease/multinode-853900,type:*coordination.Lease,resource:leases.coordination.k8s.io 516ms (02:49:10.474)
	Trace[1234057078]:  ---"Txn call completed" 515ms (02:49:10.990)]
	Trace[1234057078]: [516.889294ms] [516.889294ms] END
	
	
	==> kube-controller-manager [f8ce77440648] <==
	I0116 02:48:21.805953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.401µs"
	I0116 02:48:32.981233       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.9µs"
	I0116 02:48:33.019369       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.6µs"
	I0116 02:48:35.239448       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="244.101µs"
	I0116 02:48:35.283532       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.645864ms"
	I0116 02:48:35.285567       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="130.401µs"
	I0116 02:48:35.553911       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0116 02:51:11.090108       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-853900-m02\" does not exist"
	I0116 02:51:11.124720       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-853900-m02" podCIDRs=["10.244.1.0/24"]
	I0116 02:51:11.128235       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-h977r"
	I0116 02:51:11.128274       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6s9wr"
	I0116 02:51:15.584645       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-853900-m02"
	I0116 02:51:15.584713       1 event.go:307] "Event occurred" object="multinode-853900-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-853900-m02 event: Registered Node multinode-853900-m02 in Controller"
	I0116 02:51:29.405351       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853900-m02"
	I0116 02:51:55.815279       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0116 02:51:55.837833       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-9t8fh"
	I0116 02:51:55.849990       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-fp6wc"
	I0116 02:51:55.866270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="51.615325ms"
	I0116 02:51:55.886103       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="19.750239ms"
	I0116 02:51:55.915560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="29.317355ms"
	I0116 02:51:55.915940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="165.602µs"
	I0116 02:51:58.125598       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.719425ms"
	I0116 02:51:58.125666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="34.701µs"
	I0116 02:51:58.865374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.040006ms"
	I0116 02:51:58.866717       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.601µs"
	
	
	==> kube-proxy [e4eefc8ffba8] <==
	I0116 02:48:22.633014       1 server_others.go:69] "Using iptables proxy"
	I0116 02:48:22.649027       1 node.go:141] Successfully retrieved node IP: 172.27.112.69
	I0116 02:48:22.715154       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 02:48:22.715510       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 02:48:22.719363       1 server_others.go:152] "Using iptables Proxier"
	I0116 02:48:22.719544       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 02:48:22.720518       1 server.go:846] "Version info" version="v1.28.4"
	I0116 02:48:22.720540       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 02:48:22.721403       1 config.go:188] "Starting service config controller"
	I0116 02:48:22.721551       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 02:48:22.721582       1 config.go:97] "Starting endpoint slice config controller"
	I0116 02:48:22.721589       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 02:48:22.725929       1 config.go:315] "Starting node config controller"
	I0116 02:48:22.726027       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 02:48:22.822773       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 02:48:22.822835       1 shared_informer.go:318] Caches are synced for service config
	I0116 02:48:22.826164       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7f4701153287] <==
	W0116 02:48:05.627329       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:48:05.627357       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 02:48:05.627963       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0116 02:48:05.629847       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0116 02:48:06.437001       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 02:48:06.437032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0116 02:48:06.482504       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 02:48:06.483065       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 02:48:06.509167       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 02:48:06.509467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 02:48:06.557817       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 02:48:06.557847       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 02:48:06.749156       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 02:48:06.749269       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 02:48:06.780250       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 02:48:06.780472       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0116 02:48:06.793910       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:48:06.794128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 02:48:06.797405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 02:48:06.797622       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 02:48:06.893978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 02:48:06.894377       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 02:48:07.103507       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 02:48:07.103541       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0116 02:48:08.799264       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 02:46:13 UTC, ends at Tue 2024-01-16 02:52:47 UTC. --
	Jan 16 02:48:33 multinode-853900 kubelet[2639]: I0116 02:48:33.105036    2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj4x5\" (UniqueName: \"kubernetes.io/projected/c028c1eb-0071-40bf-a163-6f71a10dc945-kube-api-access-sj4x5\") pod \"coredns-5dd5756b68-62jpz\" (UID: \"c028c1eb-0071-40bf-a163-6f71a10dc945\") " pod="kube-system/coredns-5dd5756b68-62jpz"
	Jan 16 02:48:33 multinode-853900 kubelet[2639]: I0116 02:48:33.105357    2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c028c1eb-0071-40bf-a163-6f71a10dc945-config-volume\") pod \"coredns-5dd5756b68-62jpz\" (UID: \"c028c1eb-0071-40bf-a163-6f71a10dc945\") " pod="kube-system/coredns-5dd5756b68-62jpz"
	Jan 16 02:48:33 multinode-853900 kubelet[2639]: I0116 02:48:33.105509    2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdfbm\" (UniqueName: \"kubernetes.io/projected/5a08e24f-688d-4839-9157-d9a0b92bd32c-kube-api-access-mdfbm\") pod \"storage-provisioner\" (UID: \"5a08e24f-688d-4839-9157-d9a0b92bd32c\") " pod="kube-system/storage-provisioner"
	Jan 16 02:48:34 multinode-853900 kubelet[2639]: I0116 02:48:34.141936    2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71976a8048bc5580d44c387e684e1e9267882d3a977d698a79f1b87e942bd9a5"
	Jan 16 02:48:34 multinode-853900 kubelet[2639]: I0116 02:48:34.186481    2639 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df918467deeb18f25309ecaed44a3cefe80297f3cc46ab47da405361913298f6"
	Jan 16 02:48:35 multinode-853900 kubelet[2639]: I0116 02:48:35.219727    2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.219680473 podCreationTimestamp="2024-01-16 02:48:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 02:48:35.219423471 +0000 UTC m=+26.214565700" watchObservedRunningTime="2024-01-16 02:48:35.219680473 +0000 UTC m=+26.214822702"
	Jan 16 02:48:35 multinode-853900 kubelet[2639]: I0116 02:48:35.269921    2639 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-62jpz" podStartSLOduration=14.269879677 podCreationTimestamp="2024-01-16 02:48:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-01-16 02:48:35.237736582 +0000 UTC m=+26.232878711" watchObservedRunningTime="2024-01-16 02:48:35.269879677 +0000 UTC m=+26.265021806"
	Jan 16 02:49:09 multinode-853900 kubelet[2639]: E0116 02:49:09.364041    2639 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 02:49:09 multinode-853900 kubelet[2639]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 02:49:09 multinode-853900 kubelet[2639]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 02:49:09 multinode-853900 kubelet[2639]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 02:50:09 multinode-853900 kubelet[2639]: E0116 02:50:09.365283    2639 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 02:50:09 multinode-853900 kubelet[2639]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 02:50:09 multinode-853900 kubelet[2639]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 02:50:09 multinode-853900 kubelet[2639]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 02:51:09 multinode-853900 kubelet[2639]: E0116 02:51:09.365085    2639 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 02:51:09 multinode-853900 kubelet[2639]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 02:51:09 multinode-853900 kubelet[2639]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 02:51:09 multinode-853900 kubelet[2639]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 02:51:55 multinode-853900 kubelet[2639]: I0116 02:51:55.864723    2639 topology_manager.go:215] "Topology Admit Handler" podUID="270994d2-9d51-4495-8d56-1808af062ea0" podNamespace="default" podName="busybox-5bc68d56bd-fp6wc"
	Jan 16 02:51:55 multinode-853900 kubelet[2639]: I0116 02:51:55.870122    2639 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsssb\" (UniqueName: \"kubernetes.io/projected/270994d2-9d51-4495-8d56-1808af062ea0-kube-api-access-jsssb\") pod \"busybox-5bc68d56bd-fp6wc\" (UID: \"270994d2-9d51-4495-8d56-1808af062ea0\") " pod="default/busybox-5bc68d56bd-fp6wc"
	Jan 16 02:52:09 multinode-853900 kubelet[2639]: E0116 02:52:09.363416    2639 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 02:52:09 multinode-853900 kubelet[2639]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 02:52:09 multinode-853900 kubelet[2639]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 02:52:09 multinode-853900 kubelet[2639]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0116 02:52:39.301711   12560 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-853900 -n multinode-853900
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-853900 -n multinode-853900: (12.1408534s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-853900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (57.04s)

TestMultiNode/serial/RestartKeepsNodes (537.72s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-853900
multinode_test.go:318: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-853900
multinode_test.go:318: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-853900: (1m21.1355091s)
multinode_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-853900 --wait=true -v=8 --alsologtostderr
E0116 03:08:13.027314   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 03:08:46.609173   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 03:11:49.837902   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 03:13:13.039548   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 03:13:46.612756   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
multinode_test.go:323: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-853900 --wait=true -v=8 --alsologtostderr: (6m59.9011574s)
multinode_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-853900
multinode_test.go:335: reported node list is not the same after restart. Before restart: multinode-853900	172.27.112.69
multinode-853900-m02	172.27.122.78
multinode-853900-m03	172.27.116.8

After restart: multinode-853900	172.27.125.182
multinode-853900-m02	172.27.125.77
multinode-853900-m03	172.27.125.42
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-853900 -n multinode-853900
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-853900 -n multinode-853900: (12.2557105s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 logs -n 25: (8.9125912s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | multinode-853900 ssh -n                                                                                                  | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-853900-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-853900 cp multinode-853900-m02:/home/docker/cp-test.txt                                                        | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile3874047493\001\cp-test_multinode-853900-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-853900 ssh -n                                                                                                  | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-853900-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-853900 cp multinode-853900-m02:/home/docker/cp-test.txt                                                        | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 02:59 UTC | 16 Jan 24 02:59 UTC |
	|         | multinode-853900:/home/docker/cp-test_multinode-853900-m02_multinode-853900.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-853900 ssh -n                                                                                                  | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:00 UTC | 16 Jan 24 03:00 UTC |
	|         | multinode-853900-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-853900 ssh -n multinode-853900 sudo cat                                                                        | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:00 UTC | 16 Jan 24 03:00 UTC |
	|         | /home/docker/cp-test_multinode-853900-m02_multinode-853900.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-853900 cp multinode-853900-m02:/home/docker/cp-test.txt                                                        | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:00 UTC | 16 Jan 24 03:00 UTC |
	|         | multinode-853900-m03:/home/docker/cp-test_multinode-853900-m02_multinode-853900-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-853900 ssh -n                                                                                                  | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:00 UTC | 16 Jan 24 03:00 UTC |
	|         | multinode-853900-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-853900 ssh -n multinode-853900-m03 sudo cat                                                                    | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:00 UTC | 16 Jan 24 03:00 UTC |
	|         | /home/docker/cp-test_multinode-853900-m02_multinode-853900-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-853900 cp testdata\cp-test.txt                                                                                 | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:00 UTC | 16 Jan 24 03:01 UTC |
	|         | multinode-853900-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-853900 ssh -n                                                                                                  | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:01 UTC | 16 Jan 24 03:01 UTC |
	|         | multinode-853900-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-853900 cp multinode-853900-m03:/home/docker/cp-test.txt                                                        | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:01 UTC | 16 Jan 24 03:01 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile3874047493\001\cp-test_multinode-853900-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-853900 ssh -n                                                                                                  | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:01 UTC | 16 Jan 24 03:01 UTC |
	|         | multinode-853900-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-853900 cp multinode-853900-m03:/home/docker/cp-test.txt                                                        | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:01 UTC | 16 Jan 24 03:01 UTC |
	|         | multinode-853900:/home/docker/cp-test_multinode-853900-m03_multinode-853900.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-853900 ssh -n                                                                                                  | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:01 UTC | 16 Jan 24 03:01 UTC |
	|         | multinode-853900-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-853900 ssh -n multinode-853900 sudo cat                                                                        | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:01 UTC | 16 Jan 24 03:02 UTC |
	|         | /home/docker/cp-test_multinode-853900-m03_multinode-853900.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-853900 cp multinode-853900-m03:/home/docker/cp-test.txt                                                        | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:02 UTC | 16 Jan 24 03:02 UTC |
	|         | multinode-853900-m02:/home/docker/cp-test_multinode-853900-m03_multinode-853900-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-853900 ssh -n                                                                                                  | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:02 UTC | 16 Jan 24 03:02 UTC |
	|         | multinode-853900-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-853900 ssh -n multinode-853900-m02 sudo cat                                                                    | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:02 UTC | 16 Jan 24 03:02 UTC |
	|         | /home/docker/cp-test_multinode-853900-m03_multinode-853900-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-853900 node stop m03                                                                                           | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:02 UTC | 16 Jan 24 03:02 UTC |
	| node    | multinode-853900 node start                                                                                              | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:03 UTC | 16 Jan 24 03:05 UTC |
	|         | m03 --alsologtostderr                                                                                                    |                  |                   |         |                     |                     |
	| node    | list -p multinode-853900                                                                                                 | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:06 UTC |                     |
	| stop    | -p multinode-853900                                                                                                      | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:06 UTC | 16 Jan 24 03:07 UTC |
	| start   | -p multinode-853900                                                                                                      | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:07 UTC | 16 Jan 24 03:14 UTC |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	| node    | list -p multinode-853900                                                                                                 | multinode-853900 | minikube3\jenkins | v1.32.0 | 16 Jan 24 03:14 UTC |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 03:07:55
	Running on machine: minikube3
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 03:07:55.690430    5244 out.go:296] Setting OutFile to fd 696 ...
	I0116 03:07:55.691127    5244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:07:55.691127    5244 out.go:309] Setting ErrFile to fd 736...
	I0116 03:07:55.691127    5244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:07:55.711861    5244 out.go:303] Setting JSON to false
	I0116 03:07:55.714807    5244 start.go:128] hostinfo: {"hostname":"minikube3","uptime":52866,"bootTime":1705321609,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0116 03:07:55.714807    5244 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0116 03:07:55.715798    5244 out.go:177] * [multinode-853900] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0116 03:07:55.716961    5244 notify.go:220] Checking for updates...
	I0116 03:07:55.716961    5244 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 03:07:55.717718    5244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 03:07:55.718350    5244 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0116 03:07:55.718952    5244 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 03:07:55.719567    5244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 03:07:55.720954    5244 config.go:182] Loaded profile config "multinode-853900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:07:55.721950    5244 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 03:08:01.003483    5244 out.go:177] * Using the hyperv driver based on existing profile
	I0116 03:08:01.004575    5244 start.go:298] selected driver: hyperv
	I0116 03:08:01.004575    5244 start.go:902] validating driver "hyperv" against &{Name:multinode-853900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-853900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.112.69 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.122.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.116.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:08:01.004636    5244 start.go:913] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0116 03:08:01.057125    5244 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0116 03:08:01.057125    5244 cni.go:84] Creating CNI manager for ""
	I0116 03:08:01.057125    5244 cni.go:136] 3 nodes found, recommending kindnet
	I0116 03:08:01.057125    5244 start_flags.go:321] config:
	{Name:multinode-853900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-853900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.112.69 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.122.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.116.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:08:01.057901    5244 iso.go:125] acquiring lock: {Name:mk2c0b62d272a573835231fdc54419c800e07e34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 03:08:01.059600    5244 out.go:177] * Starting control plane node multinode-853900 in cluster multinode-853900
	I0116 03:08:01.059600    5244 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0116 03:08:01.060302    5244 preload.go:148] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0116 03:08:01.060302    5244 cache.go:56] Caching tarball of preloaded images
	I0116 03:08:01.060834    5244 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0116 03:08:01.060834    5244 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0116 03:08:01.061347    5244 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\config.json ...
	I0116 03:08:01.064018    5244 start.go:365] acquiring machines lock for multinode-853900: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:08:01.064261    5244 start.go:369] acquired machines lock for "multinode-853900" in 109.8µs
	I0116 03:08:01.064301    5244 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:08:01.064301    5244 fix.go:54] fixHost starting: 
	I0116 03:08:01.064842    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:08:03.799533    5244 main.go:141] libmachine: [stdout =====>] : Off
	
	I0116 03:08:03.799710    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:03.799783    5244 fix.go:102] recreateIfNeeded on multinode-853900: state=Stopped err=<nil>
	W0116 03:08:03.799783    5244 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:08:03.800881    5244 out.go:177] * Restarting existing hyperv VM for "multinode-853900" ...
	I0116 03:08:03.801427    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-853900
	I0116 03:08:06.698224    5244 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:08:06.698299    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:06.698299    5244 main.go:141] libmachine: Waiting for host to start...
	I0116 03:08:06.698387    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:08:08.910961    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:08:08.910992    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:08.911052    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:08:11.406166    5244 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:08:11.406329    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:12.410423    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:08:14.600785    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:08:14.600873    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:14.601017    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:08:17.101963    5244 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:08:17.102412    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:18.103546    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:08:20.279286    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:08:20.279334    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:20.279429    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:08:22.866377    5244 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:08:22.866669    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:23.868854    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:08:26.061427    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:08:26.061469    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:26.061469    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:08:28.567788    5244 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:08:28.568035    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:29.577975    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:08:31.784690    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:08:31.784690    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:31.784771    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:08:34.376384    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:08:34.376384    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:34.379349    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:08:36.510232    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:08:36.510232    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:36.510232    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:08:38.987750    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:08:38.987914    5244 main.go:141] libmachine: [stderr =====>] : 
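The repeated `Get-VM ... .state` / `ipaddresses[0]` pairs above show a retry pattern: Hyper-V reports the VM as Running well before its network adapter has an address, so the driver re-queries the IP with a short sleep between attempts until a non-empty result comes back (here, 172.27.125.182 after roughly 30 seconds). A minimal sketch of that loop; `queryIP` stands in for the PowerShell invocation and is a hypothetical parameter, not minikube's real API:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls queryIP up to `attempts` times, sleeping `delay`
// between empty results, and returns the first non-empty address.
func waitForIP(queryIP func() string, attempts int, delay time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		if ip := queryIP(); ip != "" {
			return ip, nil
		}
		time.Sleep(delay)
	}
	return "", errors.New("VM is Running but never reported an IP address")
}

func main() {
	// Simulate the log above: the first polls return an empty string,
	// then the adapter finally reports an address.
	replies := []string{"", "", "172.27.125.182"}
	i := 0
	query := func() string { r := replies[i%len(replies)]; i++; return r }

	ip, err := waitForIP(query, 5, time.Millisecond)
	fmt.Println(ip, err) // 172.27.125.182 <nil>
}
```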
	I0116 03:08:38.988238    5244 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\config.json ...
	I0116 03:08:38.991107    5244 machine.go:88] provisioning docker machine ...
	I0116 03:08:38.991215    5244 buildroot.go:166] provisioning hostname "multinode-853900"
	I0116 03:08:38.991373    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:08:41.122612    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:08:41.122905    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:41.122905    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:08:43.650515    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:08:43.650515    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:43.656943    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:08:43.657617    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.182 22 <nil> <nil>}
	I0116 03:08:43.657617    5244 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-853900 && echo "multinode-853900" | sudo tee /etc/hostname
	I0116 03:08:43.834624    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-853900
	
	I0116 03:08:43.834624    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:08:45.955007    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:08:45.955007    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:45.955107    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:08:48.499948    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:08:48.499948    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:48.505487    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:08:48.506213    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.182 22 <nil> <nil>}
	I0116 03:08:48.506213    5244 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-853900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-853900/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-853900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:08:48.674281    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:08:48.674281    5244 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0116 03:08:48.674281    5244 buildroot.go:174] setting up certificates
	I0116 03:08:48.674281    5244 provision.go:83] configureAuth start
	I0116 03:08:48.674281    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:08:50.773943    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:08:50.773943    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:50.773943    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:08:53.344567    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:08:53.344767    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:53.344904    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:08:55.466020    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:08:55.466020    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:55.466561    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:08:57.945712    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:08:57.945989    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:08:57.946298    5244 provision.go:138] copyHostCerts
	I0116 03:08:57.946298    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0116 03:08:57.946298    5244 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0116 03:08:57.946814    5244 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0116 03:08:57.947053    5244 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0116 03:08:57.948445    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0116 03:08:57.948445    5244 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0116 03:08:57.948445    5244 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0116 03:08:57.949223    5244 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0116 03:08:57.949991    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0116 03:08:57.950676    5244 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0116 03:08:57.950676    5244 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0116 03:08:57.951387    5244 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1675 bytes)
	I0116 03:08:57.952135    5244 provision.go:112] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-853900 san=[172.27.125.182 172.27.125.182 localhost 127.0.0.1 minikube multinode-853900]
	I0116 03:08:58.308191    5244 provision.go:172] copyRemoteCerts
	I0116 03:08:58.330828    5244 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:08:58.330828    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:09:00.394892    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:09:00.395155    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:00.395155    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:09:02.892538    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:09:02.892626    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:02.892897    5244 sshutil.go:53] new ssh client: &{IP:172.27.125.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\id_rsa Username:docker}
	I0116 03:09:03.016807    5244 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6859475s)
	I0116 03:09:03.016807    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0116 03:09:03.016807    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:09:03.055736    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0116 03:09:03.055736    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0116 03:09:03.097464    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0116 03:09:03.097734    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:09:03.138495    5244 provision.go:86] duration metric: configureAuth took 14.4641186s
	I0116 03:09:03.138495    5244 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:09:03.139228    5244 config.go:182] Loaded profile config "multinode-853900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:09:03.139228    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:09:05.194494    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:09:05.194494    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:05.194585    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:09:07.693807    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:09:07.694018    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:07.700571    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:09:07.701296    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.182 22 <nil> <nil>}
	I0116 03:09:07.701810    5244 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0116 03:09:07.861839    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0116 03:09:07.861839    5244 buildroot.go:70] root file system type: tmpfs
	I0116 03:09:07.862129    5244 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0116 03:09:07.862129    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:09:09.971528    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:09:09.971528    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:09.971716    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:09:12.466634    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:09:12.466634    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:12.472918    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:09:12.473491    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.182 22 <nil> <nil>}
	I0116 03:09:12.473615    5244 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0116 03:09:12.636455    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0116 03:09:12.636558    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:09:14.744641    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:09:14.744641    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:14.744768    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:09:17.284803    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:09:17.284803    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:17.291104    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:09:17.291726    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.182 22 <nil> <nil>}
	I0116 03:09:17.291726    5244 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0116 03:09:18.535509    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0116 03:09:18.535509    5244 machine.go:91] provisioned docker machine in 39.544033s
	I0116 03:09:18.535509    5244 start.go:300] post-start starting for "multinode-853900" (driver="hyperv")
	I0116 03:09:18.535509    5244 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:09:18.551657    5244 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:09:18.551657    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:09:20.623801    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:09:20.623971    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:20.624046    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:09:23.103951    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:09:23.104294    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:23.104531    5244 sshutil.go:53] new ssh client: &{IP:172.27.125.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\id_rsa Username:docker}
	I0116 03:09:23.230760    5244 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6790724s)
	I0116 03:09:23.245403    5244 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:09:23.252514    5244 command_runner.go:130] > NAME=Buildroot
	I0116 03:09:23.252514    5244 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 03:09:23.252514    5244 command_runner.go:130] > ID=buildroot
	I0116 03:09:23.252514    5244 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 03:09:23.252514    5244 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 03:09:23.252695    5244 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:09:23.252695    5244 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0116 03:09:23.252864    5244 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0116 03:09:23.254242    5244 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> 135082.pem in /etc/ssl/certs
	I0116 03:09:23.254313    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> /etc/ssl/certs/135082.pem
	I0116 03:09:23.267717    5244 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:09:23.283282    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem --> /etc/ssl/certs/135082.pem (1708 bytes)
	I0116 03:09:23.324361    5244 start.go:303] post-start completed in 4.7888207s
	I0116 03:09:23.324450    5244 fix.go:56] fixHost completed within 1m22.2596091s
	I0116 03:09:23.324499    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:09:25.408447    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:09:25.408447    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:25.408447    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:09:27.905485    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:09:27.905485    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:27.911787    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:09:27.911954    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.182 22 <nil> <nil>}
	I0116 03:09:27.911954    5244 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:09:28.069977    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374568.066922496
	
	I0116 03:09:28.070078    5244 fix.go:206] guest clock: 1705374568.066922496
	I0116 03:09:28.070078    5244 fix.go:219] Guest: 2024-01-16 03:09:28.066922496 +0000 UTC Remote: 2024-01-16 03:09:23.3244509 +0000 UTC m=+87.805766201 (delta=4.742471596s)
	I0116 03:09:28.070217    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:09:30.148349    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:09:30.148417    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:30.148417    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:09:32.623307    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:09:32.623372    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:32.629391    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:09:32.629541    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.182 22 <nil> <nil>}
	I0116 03:09:32.629541    5244 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705374568
	I0116 03:09:32.793225    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jan 16 03:09:28 UTC 2024
	
	I0116 03:09:32.793225    5244 fix.go:226] clock set: Tue Jan 16 03:09:28 UTC 2024
	 (err=<nil>)
	I0116 03:09:32.793225    5244 start.go:83] releasing machines lock for "multinode-853900", held for 1m31.7283208s
	I0116 03:09:32.793225    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:09:34.894020    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:09:34.894240    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:34.894445    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:09:37.393250    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:09:37.393250    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:37.398523    5244 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:09:37.398523    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:09:37.413432    5244 ssh_runner.go:195] Run: cat /version.json
	I0116 03:09:37.413432    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:09:39.563108    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:09:39.563108    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:39.563108    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:09:39.563246    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:39.563246    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:09:39.563246    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:09:42.177262    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:09:42.177262    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:42.177552    5244 sshutil.go:53] new ssh client: &{IP:172.27.125.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\id_rsa Username:docker}
	I0116 03:09:42.207234    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:09:42.207234    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:09:42.207234    5244 sshutil.go:53] new ssh client: &{IP:172.27.125.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\id_rsa Username:docker}
	I0116 03:09:42.350481    5244 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 03:09:42.350550    5244 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.9519946s)
	I0116 03:09:42.350550    5244 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0116 03:09:42.350550    5244 ssh_runner.go:235] Completed: cat /version.json: (4.9370854s)
	I0116 03:09:42.365779    5244 ssh_runner.go:195] Run: systemctl --version
	I0116 03:09:42.374153    5244 command_runner.go:130] > systemd 247 (247)
	I0116 03:09:42.374153    5244 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0116 03:09:42.387941    5244 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 03:09:42.394510    5244 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0116 03:09:42.395897    5244 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:09:42.408637    5244 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:09:42.432839    5244 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0116 03:09:42.432839    5244 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:09:42.432839    5244 start.go:475] detecting cgroup driver to use...
	I0116 03:09:42.433284    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:09:42.460914    5244 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0116 03:09:42.476079    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0116 03:09:42.507856    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0116 03:09:42.523851    5244 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0116 03:09:42.537817    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0116 03:09:42.569337    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 03:09:42.597902    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0116 03:09:42.627394    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 03:09:42.657382    5244 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:09:42.686510    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0116 03:09:42.716574    5244 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:09:42.730078    5244 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0116 03:09:42.746123    5244 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:09:42.774504    5244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:09:42.936631    5244 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0116 03:09:42.967142    5244 start.go:475] detecting cgroup driver to use...
	I0116 03:09:42.981983    5244 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0116 03:09:43.000053    5244 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0116 03:09:43.000053    5244 command_runner.go:130] > [Unit]
	I0116 03:09:43.000053    5244 command_runner.go:130] > Description=Docker Application Container Engine
	I0116 03:09:43.000053    5244 command_runner.go:130] > Documentation=https://docs.docker.com
	I0116 03:09:43.000053    5244 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0116 03:09:43.000053    5244 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0116 03:09:43.000053    5244 command_runner.go:130] > StartLimitBurst=3
	I0116 03:09:43.000053    5244 command_runner.go:130] > StartLimitIntervalSec=60
	I0116 03:09:43.000053    5244 command_runner.go:130] > [Service]
	I0116 03:09:43.000053    5244 command_runner.go:130] > Type=notify
	I0116 03:09:43.000053    5244 command_runner.go:130] > Restart=on-failure
	I0116 03:09:43.000053    5244 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0116 03:09:43.000053    5244 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0116 03:09:43.000053    5244 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0116 03:09:43.000053    5244 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0116 03:09:43.000053    5244 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0116 03:09:43.000053    5244 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0116 03:09:43.000053    5244 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0116 03:09:43.000053    5244 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0116 03:09:43.000053    5244 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0116 03:09:43.000053    5244 command_runner.go:130] > ExecStart=
	I0116 03:09:43.000053    5244 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0116 03:09:43.000053    5244 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0116 03:09:43.000053    5244 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0116 03:09:43.000053    5244 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0116 03:09:43.000053    5244 command_runner.go:130] > LimitNOFILE=infinity
	I0116 03:09:43.000053    5244 command_runner.go:130] > LimitNPROC=infinity
	I0116 03:09:43.000053    5244 command_runner.go:130] > LimitCORE=infinity
	I0116 03:09:43.000053    5244 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0116 03:09:43.000053    5244 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0116 03:09:43.000053    5244 command_runner.go:130] > TasksMax=infinity
	I0116 03:09:43.000053    5244 command_runner.go:130] > TimeoutStartSec=0
	I0116 03:09:43.000053    5244 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0116 03:09:43.000053    5244 command_runner.go:130] > Delegate=yes
	I0116 03:09:43.000053    5244 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0116 03:09:43.000053    5244 command_runner.go:130] > KillMode=process
	I0116 03:09:43.000053    5244 command_runner.go:130] > [Install]
	I0116 03:09:43.000053    5244 command_runner.go:130] > WantedBy=multi-user.target
	I0116 03:09:43.015323    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:09:43.049366    5244 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:09:43.085340    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:09:43.118430    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 03:09:43.150395    5244 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0116 03:09:43.201017    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 03:09:43.221534    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:09:43.246241    5244 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
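The crictl.yaml write above (mkdir, printf, tee) can be reproduced locally; this sketch targets a temp dir instead of /etc, so the path is illustrative, not minikube's real target:

```shell
# Re-create minikube's crictl config write against a throwaway directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/etc"
# Same one-line YAML minikube pipes through tee into /etc/crictl.yaml.
printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | tee "$tmp/etc/crictl.yaml"
```

With this file in place, crictl talks to the cri-dockerd socket instead of probing default endpoints.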
	I0116 03:09:43.261863    5244 ssh_runner.go:195] Run: which cri-dockerd
	I0116 03:09:43.266423    5244 command_runner.go:130] > /usr/bin/cri-dockerd
	I0116 03:09:43.278391    5244 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0116 03:09:43.292753    5244 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0116 03:09:43.332129    5244 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0116 03:09:43.495187    5244 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0116 03:09:43.644708    5244 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0116 03:09:43.644708    5244 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0116 03:09:43.684391    5244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:09:43.838477    5244 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0116 03:09:45.412299    5244 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5738113s)
	I0116 03:09:45.428086    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0116 03:09:45.461360    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0116 03:09:45.494736    5244 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0116 03:09:45.668145    5244 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0116 03:09:45.825325    5244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:09:45.982189    5244 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0116 03:09:46.019929    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0116 03:09:46.051893    5244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:09:46.222901    5244 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0116 03:09:46.327723    5244 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0116 03:09:46.342340    5244 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0116 03:09:46.350472    5244 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0116 03:09:46.350551    5244 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 03:09:46.350551    5244 command_runner.go:130] > Device: 16h/22d	Inode: 838         Links: 1
	I0116 03:09:46.350551    5244 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0116 03:09:46.350551    5244 command_runner.go:130] > Access: 2024-01-16 03:09:46.236197365 +0000
	I0116 03:09:46.350551    5244 command_runner.go:130] > Modify: 2024-01-16 03:09:46.236197365 +0000
	I0116 03:09:46.350551    5244 command_runner.go:130] > Change: 2024-01-16 03:09:46.240197365 +0000
	I0116 03:09:46.350644    5244 command_runner.go:130] >  Birth: -
	I0116 03:09:46.350644    5244 start.go:543] Will wait 60s for crictl version
	I0116 03:09:46.365178    5244 ssh_runner.go:195] Run: which crictl
	I0116 03:09:46.370406    5244 command_runner.go:130] > /usr/bin/crictl
	I0116 03:09:46.386706    5244 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:09:46.456250    5244 command_runner.go:130] > Version:  0.1.0
	I0116 03:09:46.456328    5244 command_runner.go:130] > RuntimeName:  docker
	I0116 03:09:46.456328    5244 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0116 03:09:46.456328    5244 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 03:09:46.456388    5244 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0116 03:09:46.468047    5244 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0116 03:09:46.501141    5244 command_runner.go:130] > 24.0.7
	I0116 03:09:46.512019    5244 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0116 03:09:46.548775    5244 command_runner.go:130] > 24.0.7
	I0116 03:09:46.550016    5244 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0116 03:09:46.550139    5244 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0116 03:09:46.554991    5244 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0116 03:09:46.555516    5244 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0116 03:09:46.555516    5244 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0116 03:09:46.555516    5244 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:4e:7e Flags:up|broadcast|multicast|running}
	I0116 03:09:46.558721    5244 ip.go:210] interface addr: fe80::d699:fcba:3e2b:1549/64
	I0116 03:09:46.559256    5244 ip.go:210] interface addr: 172.27.112.1/20
	I0116 03:09:46.572135    5244 ssh_runner.go:195] Run: grep 172.27.112.1	host.minikube.internal$ /etc/hosts
	I0116 03:09:46.577509    5244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
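The hosts-file update above is idempotent: grep -v strips any stale host.minikube.internal entry before the fresh one is appended. A sketch against a temp file (the addresses here are illustrative, matching the 172.27.112.1 seen in the log):

```shell
# Idempotent hosts-entry refresh, as minikube does for host.minikube.internal,
# but on a scratch copy rather than the real /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.27.99.9\thost.minikube.internal\n' > "$hosts"
tab=$(printf '\t')
# Drop the stale tab-separated entry, then append the current IP.
{ grep -v "${tab}host.minikube.internal\$" "$hosts"; printf '172.27.112.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Running it again would leave exactly one host.minikube.internal line, which is the point of the grep -v step.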
	I0116 03:09:46.594478    5244 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0116 03:09:46.606240    5244 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0116 03:09:46.638757    5244 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0116 03:09:46.638757    5244 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0116 03:09:46.638757    5244 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0116 03:09:46.638937    5244 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0116 03:09:46.638937    5244 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0116 03:09:46.638937    5244 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0116 03:09:46.638937    5244 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0116 03:09:46.638937    5244 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0116 03:09:46.638937    5244 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:09:46.638937    5244 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0116 03:09:46.638937    5244 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0116 03:09:46.639080    5244 docker.go:615] Images already preloaded, skipping extraction
	I0116 03:09:46.648774    5244 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0116 03:09:46.675778    5244 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.28.4
	I0116 03:09:46.675778    5244 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.28.4
	I0116 03:09:46.675937    5244 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.28.4
	I0116 03:09:46.675937    5244 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.28.4
	I0116 03:09:46.675937    5244 command_runner.go:130] > kindest/kindnetd:v20230809-80a64d96
	I0116 03:09:46.675937    5244 command_runner.go:130] > registry.k8s.io/etcd:3.5.9-0
	I0116 03:09:46.675937    5244 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0116 03:09:46.675937    5244 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0116 03:09:46.675937    5244 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0116 03:09:46.675937    5244 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0116 03:09:46.676261    5244 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	kindest/kindnetd:v20230809-80a64d96
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0116 03:09:46.676261    5244 cache_images.go:84] Images are preloaded, skipping loading
	I0116 03:09:46.687204    5244 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0116 03:09:46.719050    5244 command_runner.go:130] > cgroupfs
	I0116 03:09:46.719835    5244 cni.go:84] Creating CNI manager for ""
	I0116 03:09:46.720130    5244 cni.go:136] 3 nodes found, recommending kindnet
	I0116 03:09:46.720130    5244 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:09:46.720235    5244 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.125.182 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-853900 NodeName:multinode-853900 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.125.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.125.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:09:46.720454    5244 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.125.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-853900"
	  kubeletExtraArgs:
	    node-ip: 172.27.125.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.125.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:09:46.720551    5244 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-853900 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.125.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-853900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:09:46.733965    5244 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:09:46.749846    5244 command_runner.go:130] > kubeadm
	I0116 03:09:46.749846    5244 command_runner.go:130] > kubectl
	I0116 03:09:46.749846    5244 command_runner.go:130] > kubelet
	I0116 03:09:46.750667    5244 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:09:46.764840    5244 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0116 03:09:46.777939    5244 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0116 03:09:46.803937    5244 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:09:46.828766    5244 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0116 03:09:46.872122    5244 ssh_runner.go:195] Run: grep 172.27.125.182	control-plane.minikube.internal$ /etc/hosts
	I0116 03:09:46.877424    5244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.125.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:09:46.899090    5244 certs.go:56] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900 for IP: 172.27.125.182
	I0116 03:09:46.899187    5244 certs.go:190] acquiring lock for shared ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:09:46.899795    5244 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0116 03:09:46.900086    5244 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0116 03:09:46.901002    5244 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\client.key
	I0116 03:09:46.901085    5244 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.key.8b0109c8
	I0116 03:09:46.901253    5244 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.crt.8b0109c8 with IP's: [172.27.125.182 10.96.0.1 127.0.0.1 10.0.0.1]
	I0116 03:09:46.987601    5244 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.crt.8b0109c8 ...
	I0116 03:09:46.987601    5244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.crt.8b0109c8: {Name:mkde1435c1ecf60d2d463fc67253dd7ff5126a21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:09:46.989434    5244 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.key.8b0109c8 ...
	I0116 03:09:46.989434    5244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.key.8b0109c8: {Name:mk8d107db04942723507b96fd90b684e15cf9afa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:09:46.989985    5244 certs.go:337] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.crt.8b0109c8 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.crt
	I0116 03:09:47.002021    5244 certs.go:341] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.key.8b0109c8 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.key
	I0116 03:09:47.003096    5244 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\proxy-client.key
	I0116 03:09:47.003096    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0116 03:09:47.004108    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0116 03:09:47.004458    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0116 03:09:47.004458    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0116 03:09:47.004458    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 03:09:47.004458    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0116 03:09:47.004458    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 03:09:47.005472    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 03:09:47.006452    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508.pem (1338 bytes)
	W0116 03:09:47.006747    5244 certs.go:433] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508_empty.pem, impossibly tiny 0 bytes
	I0116 03:09:47.006819    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0116 03:09:47.007176    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0116 03:09:47.007523    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0116 03:09:47.007840    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0116 03:09:47.008471    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem (1708 bytes)
	I0116 03:09:47.008727    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> /usr/share/ca-certificates/135082.pem
	I0116 03:09:47.009135    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:09:47.009135    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508.pem -> /usr/share/ca-certificates/13508.pem
	I0116 03:09:47.010642    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0116 03:09:47.049550    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0116 03:09:47.089101    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0116 03:09:47.126164    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0116 03:09:47.167906    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:09:47.209033    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 03:09:47.246836    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:09:47.285894    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:09:47.321840    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem --> /usr/share/ca-certificates/135082.pem (1708 bytes)
	I0116 03:09:47.358153    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:09:47.392933    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508.pem --> /usr/share/ca-certificates/13508.pem (1338 bytes)
	I0116 03:09:47.427868    5244 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0116 03:09:47.469028    5244 ssh_runner.go:195] Run: openssl version
	I0116 03:09:47.476956    5244 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 03:09:47.491299    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135082.pem && ln -fs /usr/share/ca-certificates/135082.pem /etc/ssl/certs/135082.pem"
	I0116 03:09:47.522077    5244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135082.pem
	I0116 03:09:47.527167    5244 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 01:53 /usr/share/ca-certificates/135082.pem
	I0116 03:09:47.528115    5244 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 01:53 /usr/share/ca-certificates/135082.pem
	I0116 03:09:47.541000    5244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135082.pem
	I0116 03:09:47.548237    5244 command_runner.go:130] > 3ec20f2e
	I0116 03:09:47.560682    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/135082.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:09:47.591471    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:09:47.621254    5244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:09:47.627321    5244 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 01:40 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:09:47.627391    5244 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 01:40 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:09:47.640469    5244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:09:47.647117    5244 command_runner.go:130] > b5213941
	I0116 03:09:47.661366    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:09:47.694460    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13508.pem && ln -fs /usr/share/ca-certificates/13508.pem /etc/ssl/certs/13508.pem"
	I0116 03:09:47.722977    5244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13508.pem
	I0116 03:09:47.729148    5244 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 01:53 /usr/share/ca-certificates/13508.pem
	I0116 03:09:47.729496    5244 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 01:53 /usr/share/ca-certificates/13508.pem
	I0116 03:09:47.744625    5244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13508.pem
	I0116 03:09:47.752097    5244 command_runner.go:130] > 51391683
	I0116 03:09:47.765669    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13508.pem /etc/ssl/certs/51391683.0"
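The sequence above (openssl x509 -hash, then test -L || ln -fs) installs each CA cert under its OpenSSL subject-hash name so the verifier can find it. A sketch using a throwaway self-signed cert in a temp dir rather than /etc/ssl/certs (paths and CN are illustrative; assumes openssl is on PATH):

```shell
# Install a cert under its subject-hash link name, minikube-style.
tmp=$(mktemp -d)
# Generate a short-lived self-signed cert to stand in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
# Same hash OpenSSL's cert lookup uses (e.g. b5213941 in the log above).
certhash=$(openssl x509 -hash -noout -in "$tmp/cert.pem")
# Only (re)create the <hash>.0 symlink if it is not already a link.
[ -L "$tmp/$certhash.0" ] || ln -fs "$tmp/cert.pem" "$tmp/$certhash.0"
openssl x509 -subject -noout -in "$tmp/$certhash.0"
```

The `.0` suffix disambiguates distinct certs whose subjects hash to the same value.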
	I0116 03:09:47.793393    5244 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:09:47.801229    5244 command_runner.go:130] > ca.crt
	I0116 03:09:47.801229    5244 command_runner.go:130] > ca.key
	I0116 03:09:47.801432    5244 command_runner.go:130] > healthcheck-client.crt
	I0116 03:09:47.801432    5244 command_runner.go:130] > healthcheck-client.key
	I0116 03:09:47.801432    5244 command_runner.go:130] > peer.crt
	I0116 03:09:47.801432    5244 command_runner.go:130] > peer.key
	I0116 03:09:47.801432    5244 command_runner.go:130] > server.crt
	I0116 03:09:47.801432    5244 command_runner.go:130] > server.key
	I0116 03:09:47.813629    5244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0116 03:09:47.822072    5244 command_runner.go:130] > Certificate will not expire
	I0116 03:09:47.838317    5244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0116 03:09:47.849799    5244 command_runner.go:130] > Certificate will not expire
	I0116 03:09:47.861447    5244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0116 03:09:47.869153    5244 command_runner.go:130] > Certificate will not expire
	I0116 03:09:47.885376    5244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0116 03:09:47.893857    5244 command_runner.go:130] > Certificate will not expire
	I0116 03:09:47.906476    5244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0116 03:09:47.913329    5244 command_runner.go:130] > Certificate will not expire
	I0116 03:09:47.926921    5244 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0116 03:09:47.936238    5244 command_runner.go:130] > Certificate will not expire
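Each "Certificate will not expire" line above comes from openssl's -checkend flag, which exits 0 when the cert is still valid N seconds from now (86400 = 24h). A sketch on a freshly generated cert (temp paths and CN are illustrative; assumes openssl is on PATH):

```shell
# Check a cert stays valid for at least the next 24 hours, as minikube does.
tmp2=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj "/CN=demo" \
  -keyout "$tmp2/key.pem" -out "$tmp2/cert.pem" 2>/dev/null
if openssl x509 -noout -in "$tmp2/cert.pem" -checkend 86400; then
  echo "Certificate will not expire"
fi
```

A nonzero exit here would trigger minikube's certificate regeneration path instead.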
	I0116 03:09:47.937543    5244 kubeadm.go:404] StartCluster: {Name:multinode-853900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.28.4 ClusterName:multinode-853900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.125.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.122.78 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.116.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingr
ess:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:09:47.947829    5244 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0116 03:09:47.988086    5244 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0116 03:09:48.005149    5244 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0116 03:09:48.005149    5244 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0116 03:09:48.005149    5244 command_runner.go:130] > /var/lib/minikube/etcd:
	I0116 03:09:48.005433    5244 command_runner.go:130] > member
	I0116 03:09:48.005540    5244 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0116 03:09:48.005729    5244 kubeadm.go:636] restartCluster start
	I0116 03:09:48.023628    5244 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0116 03:09:48.047926    5244 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0116 03:09:48.048961    5244 kubeconfig.go:135] verify returned: extract IP: "multinode-853900" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 03:09:48.049518    5244 kubeconfig.go:146] "multinode-853900" context is missing from C:\Users\jenkins.minikube3\minikube-integration\kubeconfig - will repair!
	I0116 03:09:48.049558    5244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:09:48.063948    5244 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 03:09:48.063948    5244 kapi.go:59] client config for multinode-853900: &rest.Config{Host:"https://172.27.125.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900/client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900/client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x270c520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:09:48.066050    5244 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 03:09:48.078401    5244 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:09:48.096679    5244 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0116 03:09:48.096710    5244 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0116 03:09:48.096710    5244 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0116 03:09:48.096710    5244 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0116 03:09:48.096710    5244 command_runner.go:130] >  kind: InitConfiguration
	I0116 03:09:48.096710    5244 command_runner.go:130] >  localAPIEndpoint:
	I0116 03:09:48.096710    5244 command_runner.go:130] > -  advertiseAddress: 172.27.112.69
	I0116 03:09:48.096710    5244 command_runner.go:130] > +  advertiseAddress: 172.27.125.182
	I0116 03:09:48.096710    5244 command_runner.go:130] >    bindPort: 8443
	I0116 03:09:48.096710    5244 command_runner.go:130] >  bootstrapTokens:
	I0116 03:09:48.096710    5244 command_runner.go:130] >    - groups:
	I0116 03:09:48.096710    5244 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0116 03:09:48.096710    5244 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0116 03:09:48.096710    5244 command_runner.go:130] >    name: "multinode-853900"
	I0116 03:09:48.096710    5244 command_runner.go:130] >    kubeletExtraArgs:
	I0116 03:09:48.096710    5244 command_runner.go:130] > -    node-ip: 172.27.112.69
	I0116 03:09:48.096710    5244 command_runner.go:130] > +    node-ip: 172.27.125.182
	I0116 03:09:48.096710    5244 command_runner.go:130] >    taints: []
	I0116 03:09:48.096710    5244 command_runner.go:130] >  ---
	I0116 03:09:48.096710    5244 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0116 03:09:48.096710    5244 command_runner.go:130] >  kind: ClusterConfiguration
	I0116 03:09:48.096710    5244 command_runner.go:130] >  apiServer:
	I0116 03:09:48.096710    5244 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.27.112.69"]
	I0116 03:09:48.096710    5244 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.27.125.182"]
	I0116 03:09:48.096710    5244 command_runner.go:130] >    extraArgs:
	I0116 03:09:48.096710    5244 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0116 03:09:48.096710    5244 command_runner.go:130] >  controllerManager:
	I0116 03:09:48.096710    5244 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.27.112.69
	+  advertiseAddress: 172.27.125.182
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-853900"
	   kubeletExtraArgs:
	-    node-ip: 172.27.112.69
	+    node-ip: 172.27.125.182
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.27.112.69"]
	+  certSANs: ["127.0.0.1", "localhost", "172.27.125.182"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
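The `diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` above is how minikube decides "needs reconfigure: configs differ": the VM came back with a new IP (172.27.112.69 → 172.27.125.182), so the freshly rendered config no longer matches the one on disk. The decision reduces to a byte comparison; this is a minimal sketch with an assumed function name, not minikube's code:

```go
package main

import (
	"bytes"
	"fmt"
)

// needsReconfigure compares the kubeadm.yaml deployed on the node against a
// freshly rendered kubeadm.yaml.new; any difference means the cluster must
// be reconfigured rather than simply restarted.
func needsReconfigure(deployed, rendered []byte) bool {
	return !bytes.Equal(deployed, rendered)
}

func main() {
	deployed := []byte("advertiseAddress: 172.27.112.69\n")
	rendered := []byte("advertiseAddress: 172.27.125.182\n") // node got a new DHCP lease
	fmt.Println(needsReconfigure(deployed, rendered))        // prints "true"
}
```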
	I0116 03:09:48.096710    5244 kubeadm.go:1135] stopping kube-system containers ...
	I0116 03:09:48.108154    5244 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0116 03:09:48.140451    5244 command_runner.go:130] > 7c4c2a1e9df5
	I0116 03:09:48.140451    5244 command_runner.go:130] > c7157c42967e
	I0116 03:09:48.140540    5244 command_runner.go:130] > 71976a8048bc
	I0116 03:09:48.140540    5244 command_runner.go:130] > df918467deeb
	I0116 03:09:48.140540    5244 command_runner.go:130] > d0b6d500287e
	I0116 03:09:48.140540    5244 command_runner.go:130] > e4eefc8ffba8
	I0116 03:09:48.140540    5244 command_runner.go:130] > a5cb81c4b523
	I0116 03:09:48.140540    5244 command_runner.go:130] > b1892deb5d5d
	I0116 03:09:48.140540    5244 command_runner.go:130] > 7f4701153287
	I0116 03:09:48.140597    5244 command_runner.go:130] > dcdfa712e694
	I0116 03:09:48.140597    5244 command_runner.go:130] > f8ce77440648
	I0116 03:09:48.140597    5244 command_runner.go:130] > e829a48e9f66
	I0116 03:09:48.140597    5244 command_runner.go:130] > bcdc39931f56
	I0116 03:09:48.140597    5244 command_runner.go:130] > 7477cf965214
	I0116 03:09:48.140655    5244 command_runner.go:130] > 9f0b4732ffdd
	I0116 03:09:48.140655    5244 command_runner.go:130] > c04c5d960883
	I0116 03:09:48.140719    5244 docker.go:483] Stopping containers: [7c4c2a1e9df5 c7157c42967e 71976a8048bc df918467deeb d0b6d500287e e4eefc8ffba8 a5cb81c4b523 b1892deb5d5d 7f4701153287 dcdfa712e694 f8ce77440648 e829a48e9f66 bcdc39931f56 7477cf965214 9f0b4732ffdd c04c5d960883]
	I0116 03:09:48.151904    5244 ssh_runner.go:195] Run: docker stop 7c4c2a1e9df5 c7157c42967e 71976a8048bc df918467deeb d0b6d500287e e4eefc8ffba8 a5cb81c4b523 b1892deb5d5d 7f4701153287 dcdfa712e694 f8ce77440648 e829a48e9f66 bcdc39931f56 7477cf965214 9f0b4732ffdd c04c5d960883
	I0116 03:09:48.178509    5244 command_runner.go:130] > 7c4c2a1e9df5
	I0116 03:09:48.179158    5244 command_runner.go:130] > c7157c42967e
	I0116 03:09:48.179158    5244 command_runner.go:130] > 71976a8048bc
	I0116 03:09:48.179158    5244 command_runner.go:130] > df918467deeb
	I0116 03:09:48.179158    5244 command_runner.go:130] > d0b6d500287e
	I0116 03:09:48.179158    5244 command_runner.go:130] > e4eefc8ffba8
	I0116 03:09:48.179158    5244 command_runner.go:130] > a5cb81c4b523
	I0116 03:09:48.179158    5244 command_runner.go:130] > b1892deb5d5d
	I0116 03:09:48.179158    5244 command_runner.go:130] > 7f4701153287
	I0116 03:09:48.179158    5244 command_runner.go:130] > dcdfa712e694
	I0116 03:09:48.179158    5244 command_runner.go:130] > f8ce77440648
	I0116 03:09:48.179158    5244 command_runner.go:130] > e829a48e9f66
	I0116 03:09:48.179275    5244 command_runner.go:130] > bcdc39931f56
	I0116 03:09:48.179275    5244 command_runner.go:130] > 7477cf965214
	I0116 03:09:48.179275    5244 command_runner.go:130] > 9f0b4732ffdd
	I0116 03:09:48.179275    5244 command_runner.go:130] > c04c5d960883
	I0116 03:09:48.193155    5244 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0116 03:09:48.232922    5244 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0116 03:09:48.247518    5244 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0116 03:09:48.247518    5244 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0116 03:09:48.247518    5244 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0116 03:09:48.247518    5244 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:09:48.247518    5244 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0116 03:09:48.262207    5244 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0116 03:09:48.277262    5244 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0116 03:09:48.277466    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:09:48.674167    5244 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0116 03:09:48.675206    5244 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0116 03:09:48.675206    5244 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0116 03:09:48.675206    5244 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0116 03:09:48.675206    5244 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0116 03:09:48.675259    5244 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0116 03:09:48.675259    5244 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0116 03:09:48.675292    5244 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0116 03:09:48.675292    5244 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0116 03:09:48.675292    5244 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0116 03:09:48.675292    5244 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0116 03:09:48.675466    5244 command_runner.go:130] > [certs] Using the existing "sa" key
	I0116 03:09:48.675594    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:09:48.743623    5244 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0116 03:09:48.844661    5244 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0116 03:09:48.934815    5244 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0116 03:09:49.289134    5244 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0116 03:09:49.589023    5244 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0116 03:09:49.593023    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:09:49.834082    5244 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:09:49.834082    5244 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:09:49.834082    5244 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 03:09:49.834082    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:09:49.923207    5244 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0116 03:09:49.923237    5244 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0116 03:09:49.923237    5244 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0116 03:09:49.923237    5244 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0116 03:09:49.923237    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:09:50.006989    5244 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
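Rather than a full `kubeadm init`, the restart path above replays individual phases over SSH: `certs all`, `kubeconfig all`, `kubelet-start`, `control-plane all`, then `etcd local`, all against the same `--config` file. A sketch of how those command strings are assembled (the helper name is hypothetical):

```go
package main

import "fmt"

// phaseCmd renders one `kubeadm init phase ...` invocation of the kind the
// log shows, using the versioned kubeadm binary staged on the node.
func phaseCmd(binDir, phase, config string) string {
	return fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`,
		binDir, phase, config)
}

func main() {
	// Phase order taken from the log above; later phases depend on earlier ones
	// (kubeconfig files need certs, the kubelet needs its config written, etc.).
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		fmt.Println(phaseCmd("/var/lib/minikube/binaries/v1.28.4", p, "/var/tmp/minikube/kubeadm.yaml"))
	}
}
```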
	I0116 03:09:50.007102    5244 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:09:50.024717    5244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:09:50.528625    5244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:09:51.036201    5244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:09:51.526491    5244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:09:52.037717    5244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:09:52.531970    5244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:09:53.027801    5244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:09:53.057458    5244 command_runner.go:130] > 1889
	I0116 03:09:53.057458    5244 api_server.go:72] duration metric: took 3.0503359s to wait for apiserver process to appear ...
	I0116 03:09:53.057458    5244 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:09:53.057458    5244 api_server.go:253] Checking apiserver healthz at https://172.27.125.182:8443/healthz ...
	I0116 03:09:56.260857    5244 api_server.go:279] https://172.27.125.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:09:56.260857    5244 api_server.go:103] status: https://172.27.125.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:09:56.260857    5244 api_server.go:253] Checking apiserver healthz at https://172.27.125.182:8443/healthz ...
	I0116 03:09:56.271874    5244 api_server.go:279] https://172.27.125.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0116 03:09:56.271874    5244 api_server.go:103] status: https://172.27.125.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0116 03:09:56.562630    5244 api_server.go:253] Checking apiserver healthz at https://172.27.125.182:8443/healthz ...
	I0116 03:09:56.578688    5244 api_server.go:279] https://172.27.125.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:09:56.578688    5244 api_server.go:103] status: https://172.27.125.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:09:57.071822    5244 api_server.go:253] Checking apiserver healthz at https://172.27.125.182:8443/healthz ...
	I0116 03:09:57.081307    5244 api_server.go:279] https://172.27.125.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0116 03:09:57.081307    5244 api_server.go:103] status: https://172.27.125.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0116 03:09:57.566404    5244 api_server.go:253] Checking apiserver healthz at https://172.27.125.182:8443/healthz ...
	I0116 03:09:57.575609    5244 api_server.go:279] https://172.27.125.182:8443/healthz returned 200:
	ok
	I0116 03:09:57.576291    5244 round_trippers.go:463] GET https://172.27.125.182:8443/version
	I0116 03:09:57.576291    5244 round_trippers.go:469] Request Headers:
	I0116 03:09:57.576353    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:57.576353    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:09:57.590320    5244 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0116 03:09:57.590736    5244 round_trippers.go:577] Response Headers:
	I0116 03:09:57.590736    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:09:57.590793    5244 round_trippers.go:580]     Content-Length: 264
	I0116 03:09:57.590793    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:57 GMT
	I0116 03:09:57.590793    5244 round_trippers.go:580]     Audit-Id: 17281399-89a8-4dcc-a440-e4b18864a1b9
	I0116 03:09:57.590793    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:57.590793    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:57.590857    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:09:57.590857    5244 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0116 03:09:57.591044    5244 api_server.go:141] control plane version: v1.28.4
	I0116 03:09:57.591107    5244 api_server.go:131] duration metric: took 4.5336185s to wait for apiserver health ...
	I0116 03:09:57.591107    5244 cni.go:84] Creating CNI manager for ""
	I0116 03:09:57.591167    5244 cni.go:136] 3 nodes found, recommending kindnet
	I0116 03:09:57.591410    5244 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0116 03:09:57.606636    5244 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 03:09:57.613642    5244 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 03:09:57.613642    5244 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 03:09:57.613642    5244 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 03:09:57.613642    5244 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 03:09:57.613642    5244 command_runner.go:130] > Access: 2024-01-16 03:08:31.256896000 +0000
	I0116 03:09:57.613642    5244 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 03:09:57.613642    5244 command_runner.go:130] > Change: 2024-01-16 03:08:20.148000000 +0000
	I0116 03:09:57.613642    5244 command_runner.go:130] >  Birth: -
	I0116 03:09:57.613642    5244 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 03:09:57.613642    5244 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 03:09:57.678848    5244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 03:09:59.461580    5244 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 03:09:59.461580    5244 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 03:09:59.461580    5244 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 03:09:59.461580    5244 command_runner.go:130] > daemonset.apps/kindnet configured
	I0116 03:09:59.461580    5244 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.7827203s)
	I0116 03:09:59.461720    5244 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:09:59.461720    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods
	I0116 03:09:59.461720    5244 round_trippers.go:469] Request Headers:
	I0116 03:09:59.461720    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:59.461720    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:09:59.468325    5244 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 03:09:59.468820    5244 round_trippers.go:577] Response Headers:
	I0116 03:09:59.468820    5244 round_trippers.go:580]     Audit-Id: 0de8da60-979e-4bd6-9994-937c1e9a6148
	I0116 03:09:59.468820    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:59.468820    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:59.468820    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:09:59.468820    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:09:59.468820    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:59 GMT
	I0116 03:09:59.470553    5244 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1698"},"items":[{"metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"1665","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84158 chars]
	I0116 03:09:59.476796    5244 system_pods.go:59] 12 kube-system pods found
	I0116 03:09:59.476796    5244 system_pods.go:61] "coredns-5dd5756b68-62jpz" [c028c1eb-0071-40bf-a163-6f71a10dc945] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0116 03:09:59.476796    5244 system_pods.go:61] "etcd-multinode-853900" [0830a000-5e72-4c45-a843-1dd557d188eb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0116 03:09:59.476796    5244 system_pods.go:61] "kindnet-6s9wr" [3ff3a556-6003-48f3-8035-67ce0ff9bc49] Running
	I0116 03:09:59.476796    5244 system_pods.go:61] "kindnet-b8hwf" [5cc5feaf-8ff7-4d36-8e75-3fd1bd07d2ec] Running
	I0116 03:09:59.476796    5244 system_pods.go:61] "kindnet-x5nvv" [2c841275-aff6-41c4-a995-5265f31aaa2d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0116 03:09:59.476796    5244 system_pods.go:61] "kube-apiserver-multinode-853900" [a437ff8c-f27b-433b-97ac-dae3d276bc92] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0116 03:09:59.476796    5244 system_pods.go:61] "kube-controller-manager-multinode-853900" [5a4d4e86-9836-401a-8d98-1519ff75a0ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0116 03:09:59.476796    5244 system_pods.go:61] "kube-proxy-h977r" [5434ef27-d483-46c1-a95d-bd86163ee965] Running
	I0116 03:09:59.476796    5244 system_pods.go:61] "kube-proxy-rfglr" [80452c87-583e-40d7-aec9-4c790772a538] Running
	I0116 03:09:59.476796    5244 system_pods.go:61] "kube-proxy-tpc2g" [0cb279ef-9d3a-4c55-9c57-ce7eede8a052] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0116 03:09:59.476796    5244 system_pods.go:61] "kube-scheduler-multinode-853900" [d75db7e3-c171-428f-9c08-f268ce16da31] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0116 03:09:59.476796    5244 system_pods.go:61] "storage-provisioner" [5a08e24f-688d-4839-9157-d9a0b92bd32c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0116 03:09:59.476796    5244 system_pods.go:74] duration metric: took 15.0758ms to wait for pod list to return data ...
	I0116 03:09:59.476796    5244 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:09:59.477331    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes
	I0116 03:09:59.477331    5244 round_trippers.go:469] Request Headers:
	I0116 03:09:59.477331    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:59.477331    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:09:59.480583    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:09:59.481394    5244 round_trippers.go:577] Response Headers:
	I0116 03:09:59.481394    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:09:59.481394    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:09:59.481394    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:59 GMT
	I0116 03:09:59.481394    5244 round_trippers.go:580]     Audit-Id: f170cf2a-2d29-4c2c-94ed-8c850a979ff8
	I0116 03:09:59.481394    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:59.481468    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:59.481913    5244 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1698"},"items":[{"metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14857 chars]
	I0116 03:09:59.483443    5244 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:09:59.483443    5244 node_conditions.go:123] node cpu capacity is 2
	I0116 03:09:59.483515    5244 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:09:59.483552    5244 node_conditions.go:123] node cpu capacity is 2
	I0116 03:09:59.483552    5244 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:09:59.483552    5244 node_conditions.go:123] node cpu capacity is 2
	I0116 03:09:59.483552    5244 node_conditions.go:105] duration metric: took 6.7561ms to run NodePressure ...
	I0116 03:09:59.483636    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0116 03:09:59.758400    5244 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0116 03:09:59.758455    5244 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0116 03:09:59.758455    5244 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0116 03:09:59.758455    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0116 03:09:59.758455    5244 round_trippers.go:469] Request Headers:
	I0116 03:09:59.758455    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:59.758455    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:09:59.763088    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:09:59.763088    5244 round_trippers.go:577] Response Headers:
	I0116 03:09:59.763088    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:09:59.763088    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:09:59.763657    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:59 GMT
	I0116 03:09:59.763657    5244 round_trippers.go:580]     Audit-Id: 80655970-c303-4b9d-afd6-16183eb08aeb
	I0116 03:09:59.763657    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:59.763657    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:59.764336    5244 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1700"},"items":[{"metadata":{"name":"etcd-multinode-853900","namespace":"kube-system","uid":"0830a000-5e72-4c45-a843-1dd557d188eb","resourceVersion":"1652","creationTimestamp":"2024-01-16T03:09:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.125.182:2379","kubernetes.io/config.hash":"69d98d086aafe436cd9405e0584ec9d9","kubernetes.io/config.mirror":"69d98d086aafe436cd9405e0584ec9d9","kubernetes.io/config.seen":"2024-01-16T03:09:50.494161665Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:09:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 29372 chars]
	I0116 03:09:59.765817    5244 kubeadm.go:787] kubelet initialised
	I0116 03:09:59.765817    5244 kubeadm.go:788] duration metric: took 7.3618ms waiting for restarted kubelet to initialise ...
	I0116 03:09:59.765817    5244 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:09:59.765817    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods
	I0116 03:09:59.765817    5244 round_trippers.go:469] Request Headers:
	I0116 03:09:59.765817    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:59.765817    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:09:59.771559    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:09:59.771607    5244 round_trippers.go:577] Response Headers:
	I0116 03:09:59.771607    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:59 GMT
	I0116 03:09:59.771607    5244 round_trippers.go:580]     Audit-Id: 62ffe198-46f6-4735-ae6b-2b88d885a176
	I0116 03:09:59.771607    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:59.771607    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:59.771607    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:09:59.771607    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:09:59.773603    5244 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1700"},"items":[{"metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"1665","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84158 chars]
	I0116 03:09:59.777460    5244 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:59.777517    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-62jpz
	I0116 03:09:59.777710    5244 round_trippers.go:469] Request Headers:
	I0116 03:09:59.777753    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:59.777753    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:09:59.780112    5244 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:09:59.780866    5244 round_trippers.go:577] Response Headers:
	I0116 03:09:59.780866    5244 round_trippers.go:580]     Audit-Id: 165265e7-ac7f-440e-9e61-458c54375abc
	I0116 03:09:59.780866    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:59.780866    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:59.780866    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:09:59.780953    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:09:59.780953    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:59 GMT
	I0116 03:09:59.781217    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"1665","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0116 03:09:59.781805    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:09:59.781805    5244 round_trippers.go:469] Request Headers:
	I0116 03:09:59.781805    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:59.781805    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:09:59.785961    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:09:59.785961    5244 round_trippers.go:577] Response Headers:
	I0116 03:09:59.785961    5244 round_trippers.go:580]     Audit-Id: ca9e4415-baa7-42ad-af84-c6171b776ce2
	I0116 03:09:59.785961    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:59.785961    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:59.785961    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:09:59.785961    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:09:59.785961    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:59 GMT
	I0116 03:09:59.785961    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:09:59.786804    5244 pod_ready.go:97] node "multinode-853900" hosting pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-853900" has status "Ready":"False"
	I0116 03:09:59.786804    5244 pod_ready.go:81] duration metric: took 9.2867ms waiting for pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace to be "Ready" ...
	E0116 03:09:59.786804    5244 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-853900" hosting pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-853900" has status "Ready":"False"
	I0116 03:09:59.786804    5244 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:59.786804    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-853900
	I0116 03:09:59.786804    5244 round_trippers.go:469] Request Headers:
	I0116 03:09:59.786804    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:09:59.786804    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:59.790806    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:09:59.790806    5244 round_trippers.go:577] Response Headers:
	I0116 03:09:59.790806    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:09:59.790806    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:59 GMT
	I0116 03:09:59.790806    5244 round_trippers.go:580]     Audit-Id: d2eea1c1-543e-4e02-b2d5-aa1acd6e7836
	I0116 03:09:59.790806    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:59.790806    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:59.790806    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:09:59.790806    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-853900","namespace":"kube-system","uid":"0830a000-5e72-4c45-a843-1dd557d188eb","resourceVersion":"1652","creationTimestamp":"2024-01-16T03:09:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.125.182:2379","kubernetes.io/config.hash":"69d98d086aafe436cd9405e0584ec9d9","kubernetes.io/config.mirror":"69d98d086aafe436cd9405e0584ec9d9","kubernetes.io/config.seen":"2024-01-16T03:09:50.494161665Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:09:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6097 chars]
	I0116 03:09:59.791695    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:09:59.791695    5244 round_trippers.go:469] Request Headers:
	I0116 03:09:59.791695    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:59.791695    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:09:59.795093    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:09:59.795093    5244 round_trippers.go:577] Response Headers:
	I0116 03:09:59.795093    5244 round_trippers.go:580]     Audit-Id: 67bf8c49-a221-435a-b0ba-ecdfa6c56016
	I0116 03:09:59.795157    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:59.795157    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:59.795157    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:09:59.795157    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:09:59.795213    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:59 GMT
	I0116 03:09:59.795422    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:09:59.795480    5244 pod_ready.go:97] node "multinode-853900" hosting pod "etcd-multinode-853900" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-853900" has status "Ready":"False"
	I0116 03:09:59.795480    5244 pod_ready.go:81] duration metric: took 8.6756ms waiting for pod "etcd-multinode-853900" in "kube-system" namespace to be "Ready" ...
	E0116 03:09:59.795480    5244 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-853900" hosting pod "etcd-multinode-853900" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-853900" has status "Ready":"False"
	I0116 03:09:59.796054    5244 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:59.796054    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-853900
	I0116 03:09:59.796054    5244 round_trippers.go:469] Request Headers:
	I0116 03:09:59.796054    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:59.796054    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:09:59.799969    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:09:59.800617    5244 round_trippers.go:577] Response Headers:
	I0116 03:09:59.800617    5244 round_trippers.go:580]     Audit-Id: eef735a1-09b8-422e-b367-38a7ac8763e4
	I0116 03:09:59.800617    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:59.800617    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:59.800702    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:09:59.800702    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:09:59.800702    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:59 GMT
	I0116 03:09:59.800702    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-853900","namespace":"kube-system","uid":"a437ff8c-f27b-433b-97ac-dae3d276bc92","resourceVersion":"1650","creationTimestamp":"2024-01-16T02:48:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.112.69:8443","kubernetes.io/config.hash":"41ea37f04f983128860ae937c9f060bb","kubernetes.io/config.mirror":"41ea37f04f983128860ae937c9f060bb","kubernetes.io/config.seen":"2024-01-16T02:48:00.146128309Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7648 chars]
	I0116 03:09:59.801426    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:09:59.801426    5244 round_trippers.go:469] Request Headers:
	I0116 03:09:59.801426    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:59.801426    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:09:59.804037    5244 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:09:59.804037    5244 round_trippers.go:577] Response Headers:
	I0116 03:09:59.804736    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:09:59.804736    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:59 GMT
	I0116 03:09:59.804736    5244 round_trippers.go:580]     Audit-Id: 298ca9ae-1013-4b5f-92ae-5566cb096803
	I0116 03:09:59.804736    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:59.804736    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:59.804736    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:09:59.804873    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:09:59.804873    5244 pod_ready.go:97] node "multinode-853900" hosting pod "kube-apiserver-multinode-853900" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-853900" has status "Ready":"False"
	I0116 03:09:59.804873    5244 pod_ready.go:81] duration metric: took 8.8191ms waiting for pod "kube-apiserver-multinode-853900" in "kube-system" namespace to be "Ready" ...
	E0116 03:09:59.804873    5244 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-853900" hosting pod "kube-apiserver-multinode-853900" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-853900" has status "Ready":"False"
	I0116 03:09:59.804873    5244 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:09:59.805499    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-853900
	I0116 03:09:59.805623    5244 round_trippers.go:469] Request Headers:
	I0116 03:09:59.805623    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:59.805623    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:09:59.808482    5244 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:09:59.808482    5244 round_trippers.go:577] Response Headers:
	I0116 03:09:59.808482    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:59.808482    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:59.808482    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:09:59.808482    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:09:59.808482    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:59 GMT
	I0116 03:09:59.808482    5244 round_trippers.go:580]     Audit-Id: 0cf6e6e5-46a4-4883-9c5a-ae8122f0ac65
	I0116 03:09:59.809461    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-853900","namespace":"kube-system","uid":"5a4d4e86-9836-401a-8d98-1519ff75a0ec","resourceVersion":"1644","creationTimestamp":"2024-01-16T02:48:08Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f09e1ab837c9ef5b247e4d57afe8993b","kubernetes.io/config.mirror":"f09e1ab837c9ef5b247e4d57afe8993b","kubernetes.io/config.seen":"2024-01-16T02:48:00.146129509Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7441 chars]
	I0116 03:09:59.876540    5244 request.go:629] Waited for 66.7002ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:09:59.876605    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:09:59.876605    5244 round_trippers.go:469] Request Headers:
	I0116 03:09:59.876605    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:09:59.876605    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:09:59.881174    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:09:59.881578    5244 round_trippers.go:577] Response Headers:
	I0116 03:09:59.881578    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:09:59 GMT
	I0116 03:09:59.881643    5244 round_trippers.go:580]     Audit-Id: 70613cd9-31fb-4232-bf3f-936483f53369
	I0116 03:09:59.881643    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:09:59.881643    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:09:59.881643    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:09:59.881643    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:09:59.881960    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:09:59.882521    5244 pod_ready.go:97] node "multinode-853900" hosting pod "kube-controller-manager-multinode-853900" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-853900" has status "Ready":"False"
	I0116 03:09:59.882573    5244 pod_ready.go:81] duration metric: took 77.1506ms waiting for pod "kube-controller-manager-multinode-853900" in "kube-system" namespace to be "Ready" ...
	E0116 03:09:59.882573    5244 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-853900" hosting pod "kube-controller-manager-multinode-853900" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-853900" has status "Ready":"False"
	I0116 03:09:59.882661    5244 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h977r" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:00.065096    5244 request.go:629] Waited for 182.175ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h977r
	I0116 03:10:00.065162    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h977r
	I0116 03:10:00.065221    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:00.065221    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:00.065276    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:00.069509    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:00.069509    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:00.069509    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:00.069509    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:00.069977    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:00.069977    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:00.069977    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:00 GMT
	I0116 03:10:00.069977    5244 round_trippers.go:580]     Audit-Id: 07e40550-d36c-44d3-b127-69950f17d43a
	I0116 03:10:00.070331    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-h977r","generateName":"kube-proxy-","namespace":"kube-system","uid":"5434ef27-d483-46c1-a95d-bd86163ee965","resourceVersion":"587","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0116 03:10:00.268910    5244 request.go:629] Waited for 198.1113ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:10:00.269024    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:10:00.269024    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:00.269024    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:00.269217    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:00.272490    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:00.272490    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:00.272490    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:00 GMT
	I0116 03:10:00.272490    5244 round_trippers.go:580]     Audit-Id: be93aaf0-0bb6-4e28-b8ff-ec505d23384b
	I0116 03:10:00.272490    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:00.273516    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:00.273516    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:00.273516    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:00.273738    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"1550","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_05_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3819 chars]
	I0116 03:10:00.274215    5244 pod_ready.go:92] pod "kube-proxy-h977r" in "kube-system" namespace has status "Ready":"True"
	I0116 03:10:00.274266    5244 pod_ready.go:81] duration metric: took 391.6023ms waiting for pod "kube-proxy-h977r" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:00.274266    5244 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-rfglr" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:00.472849    5244 request.go:629] Waited for 198.3113ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfglr
	I0116 03:10:00.473052    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfglr
	I0116 03:10:00.473105    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:00.473139    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:00.473139    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:00.477493    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:00.477549    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:00.477549    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:00 GMT
	I0116 03:10:00.477549    5244 round_trippers.go:580]     Audit-Id: 4108909c-4e81-4f39-a38c-1d32bc2ba5b3
	I0116 03:10:00.477549    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:00.477618    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:00.477618    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:00.477618    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:00.477964    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rfglr","generateName":"kube-proxy-","namespace":"kube-system","uid":"80452c87-583e-40d7-aec9-4c790772a538","resourceVersion":"1552","creationTimestamp":"2024-01-16T02:55:40Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:55:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0116 03:10:00.675091    5244 request.go:629] Waited for 196.2779ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:10:00.675295    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:10:00.675295    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:00.675411    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:00.675411    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:00.681750    5244 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 03:10:00.681791    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:00.681791    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:00 GMT
	I0116 03:10:00.681830    5244 round_trippers.go:580]     Audit-Id: e4555cee-d7d8-4bbc-9d0d-1d3e65b8b587
	I0116 03:10:00.681830    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:00.681830    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:00.681861    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:00.681861    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:00.681861    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"99b99df3-ad5b-4c59-a7a0-406b850f5433","resourceVersion":"1574","creationTimestamp":"2024-01-16T03:05:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_05_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:05:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3635 chars]
	I0116 03:10:00.682478    5244 pod_ready.go:92] pod "kube-proxy-rfglr" in "kube-system" namespace has status "Ready":"True"
	I0116 03:10:00.682478    5244 pod_ready.go:81] duration metric: took 408.209ms waiting for pod "kube-proxy-rfglr" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:00.682478    5244 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tpc2g" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:00.864989    5244 request.go:629] Waited for 182.2861ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tpc2g
	I0116 03:10:00.865105    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tpc2g
	I0116 03:10:00.865164    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:00.865312    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:00.865312    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:00.868649    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:00.868649    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:00.868649    5244 round_trippers.go:580]     Audit-Id: d7f43629-9359-405c-865a-ffe785672e10
	I0116 03:10:00.868649    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:00.868649    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:00.868649    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:00.868649    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:00.868649    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:00 GMT
	I0116 03:10:00.868649    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tpc2g","generateName":"kube-proxy-","namespace":"kube-system","uid":"0cb279ef-9d3a-4c55-9c57-ce7eede8a052","resourceVersion":"1657","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5933 chars]
	I0116 03:10:01.070471    5244 request.go:629] Waited for 201.659ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:01.070608    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:01.070608    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:01.070608    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:01.070608    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:01.075784    5244 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:10:01.075784    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:01.076027    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:01 GMT
	I0116 03:10:01.076027    5244 round_trippers.go:580]     Audit-Id: d1565323-8420-4684-8574-1c738f7c3b58
	I0116 03:10:01.076027    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:01.076065    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:01.076065    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:01.076065    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:01.076484    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:10:01.076986    5244 pod_ready.go:97] node "multinode-853900" hosting pod "kube-proxy-tpc2g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-853900" has status "Ready":"False"
	I0116 03:10:01.077044    5244 pod_ready.go:81] duration metric: took 394.5635ms waiting for pod "kube-proxy-tpc2g" in "kube-system" namespace to be "Ready" ...
	E0116 03:10:01.077104    5244 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-853900" hosting pod "kube-proxy-tpc2g" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-853900" has status "Ready":"False"
	I0116 03:10:01.077104    5244 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:01.275910    5244 request.go:629] Waited for 198.5786ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-853900
	I0116 03:10:01.276159    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-853900
	I0116 03:10:01.276159    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:01.276228    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:01.276275    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:01.281075    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:01.281075    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:01.281075    5244 round_trippers.go:580]     Audit-Id: bf07e8a6-1b83-469b-9ed2-f028d2621008
	I0116 03:10:01.281075    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:01.281075    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:01.281075    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:01.281075    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:01.281075    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:01 GMT
	I0116 03:10:01.281075    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-853900","namespace":"kube-system","uid":"d75db7e3-c171-428f-9c08-f268ce16da31","resourceVersion":"1647","creationTimestamp":"2024-01-16T02:48:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aff36fe37a6d6fc8d309826a0f54f93d","kubernetes.io/config.mirror":"aff36fe37a6d6fc8d309826a0f54f93d","kubernetes.io/config.seen":"2024-01-16T02:48:09.211494477Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5153 chars]
	I0116 03:10:01.463455    5244 request.go:629] Waited for 182.177ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:01.463617    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:01.463617    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:01.463617    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:01.463617    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:01.467081    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:01.467081    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:01.467081    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:01.467081    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:01 GMT
	I0116 03:10:01.467081    5244 round_trippers.go:580]     Audit-Id: 038f6ae1-1876-4848-b954-f40590b40611
	I0116 03:10:01.467081    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:01.467081    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:01.467081    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:01.468105    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:10:01.468619    5244 pod_ready.go:97] node "multinode-853900" hosting pod "kube-scheduler-multinode-853900" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-853900" has status "Ready":"False"
	I0116 03:10:01.468619    5244 pod_ready.go:81] duration metric: took 391.5129ms waiting for pod "kube-scheduler-multinode-853900" in "kube-system" namespace to be "Ready" ...
	E0116 03:10:01.468619    5244 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-853900" hosting pod "kube-scheduler-multinode-853900" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-853900" has status "Ready":"False"
	I0116 03:10:01.468619    5244 pod_ready.go:38] duration metric: took 1.7027914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:10:01.468619    5244 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0116 03:10:01.484871    5244 command_runner.go:130] > -16
	I0116 03:10:01.485528    5244 ops.go:34] apiserver oom_adj: -16
	I0116 03:10:01.485636    5244 kubeadm.go:640] restartCluster took 13.4798173s
	I0116 03:10:01.485636    5244 kubeadm.go:406] StartCluster complete in 13.5480031s
	I0116 03:10:01.485636    5244 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:10:01.485883    5244 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 03:10:01.487515    5244 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:10:01.488709    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0116 03:10:01.489118    5244 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0116 03:10:01.490039    5244 out.go:177] * Enabled addons: 
	I0116 03:10:01.490815    5244 addons.go:505] enable addons completed in 1.941ms: enabled=[]
	I0116 03:10:01.489185    5244 config.go:182] Loaded profile config "multinode-853900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:10:01.500625    5244 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 03:10:01.500991    5244 kapi.go:59] client config for multinode-853900: &rest.Config{Host:"https://172.27.125.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x270c520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:10:01.502631    5244 cert_rotation.go:137] Starting client certificate rotation controller
	I0116 03:10:01.503059    5244 round_trippers.go:463] GET https://172.27.125.182:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 03:10:01.503059    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:01.503059    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:01.503059    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:01.517210    5244 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0116 03:10:01.517210    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:01.517210    5244 round_trippers.go:580]     Audit-Id: 7ade5581-1715-485d-88f5-d6349c177762
	I0116 03:10:01.517210    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:01.517210    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:01.517210    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:01.517210    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:01.517210    5244 round_trippers.go:580]     Content-Length: 292
	I0116 03:10:01.517210    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:01 GMT
	I0116 03:10:01.517210    5244 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bb9e4be9-a821-417a-b943-b930d6cec07c","resourceVersion":"1699","creationTimestamp":"2024-01-16T02:48:09Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 03:10:01.517210    5244 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-853900" context rescaled to 1 replicas
	I0116 03:10:01.517210    5244 start.go:223] Will wait 6m0s for node &{Name: IP:172.27.125.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0116 03:10:01.518608    5244 out.go:177] * Verifying Kubernetes components...
	I0116 03:10:01.533541    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:10:01.645541    5244 command_runner.go:130] > apiVersion: v1
	I0116 03:10:01.645541    5244 command_runner.go:130] > data:
	I0116 03:10:01.645541    5244 command_runner.go:130] >   Corefile: |
	I0116 03:10:01.645541    5244 command_runner.go:130] >     .:53 {
	I0116 03:10:01.645541    5244 command_runner.go:130] >         log
	I0116 03:10:01.645541    5244 command_runner.go:130] >         errors
	I0116 03:10:01.645541    5244 command_runner.go:130] >         health {
	I0116 03:10:01.645541    5244 command_runner.go:130] >            lameduck 5s
	I0116 03:10:01.645541    5244 command_runner.go:130] >         }
	I0116 03:10:01.645541    5244 command_runner.go:130] >         ready
	I0116 03:10:01.645541    5244 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0116 03:10:01.645541    5244 command_runner.go:130] >            pods insecure
	I0116 03:10:01.645541    5244 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0116 03:10:01.645541    5244 command_runner.go:130] >            ttl 30
	I0116 03:10:01.645541    5244 command_runner.go:130] >         }
	I0116 03:10:01.645541    5244 command_runner.go:130] >         prometheus :9153
	I0116 03:10:01.645541    5244 command_runner.go:130] >         hosts {
	I0116 03:10:01.645541    5244 command_runner.go:130] >            172.27.112.1 host.minikube.internal
	I0116 03:10:01.645541    5244 command_runner.go:130] >            fallthrough
	I0116 03:10:01.645541    5244 command_runner.go:130] >         }
	I0116 03:10:01.645541    5244 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0116 03:10:01.645541    5244 command_runner.go:130] >            max_concurrent 1000
	I0116 03:10:01.645541    5244 command_runner.go:130] >         }
	I0116 03:10:01.645541    5244 command_runner.go:130] >         cache 30
	I0116 03:10:01.645541    5244 command_runner.go:130] >         loop
	I0116 03:10:01.645541    5244 command_runner.go:130] >         reload
	I0116 03:10:01.645541    5244 command_runner.go:130] >         loadbalance
	I0116 03:10:01.645541    5244 command_runner.go:130] >     }
	I0116 03:10:01.645541    5244 command_runner.go:130] > kind: ConfigMap
	I0116 03:10:01.645541    5244 command_runner.go:130] > metadata:
	I0116 03:10:01.645541    5244 command_runner.go:130] >   creationTimestamp: "2024-01-16T02:48:09Z"
	I0116 03:10:01.645541    5244 command_runner.go:130] >   name: coredns
	I0116 03:10:01.645541    5244 command_runner.go:130] >   namespace: kube-system
	I0116 03:10:01.645541    5244 command_runner.go:130] >   resourceVersion: "363"
	I0116 03:10:01.645541    5244 command_runner.go:130] >   uid: fe1f65b2-4581-48a1-8dac-27ca5a22cf1f
	I0116 03:10:01.646541    5244 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0116 03:10:01.646541    5244 node_ready.go:35] waiting up to 6m0s for node "multinode-853900" to be "Ready" ...
	I0116 03:10:01.666917    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:01.666917    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:01.666917    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:01.666917    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:01.670692    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:01.670692    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:01.670692    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:01.670692    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:01.670692    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:01.670692    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:01 GMT
	I0116 03:10:01.670692    5244 round_trippers.go:580]     Audit-Id: d1a2ac65-036b-4a46-8fc1-f81c37d6d344
	I0116 03:10:01.670692    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:01.671242    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:10:02.156502    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:02.156502    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:02.156806    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:02.156806    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:02.161089    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:02.161575    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:02.161575    5244 round_trippers.go:580]     Audit-Id: 256e5701-f25f-4fa7-845b-84b949b622c2
	I0116 03:10:02.161575    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:02.161575    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:02.161575    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:02.161575    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:02.161575    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:02 GMT
	I0116 03:10:02.162083    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:10:02.654267    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:02.654267    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:02.654267    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:02.654530    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:02.658938    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:02.659313    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:02.659313    5244 round_trippers.go:580]     Audit-Id: ff622b67-120a-47f7-a0c0-94a0db996ed7
	I0116 03:10:02.659313    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:02.659313    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:02.659313    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:02.659313    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:02.659313    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:02 GMT
	I0116 03:10:02.659585    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:10:03.152335    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:03.152417    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:03.152417    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:03.152417    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:03.157024    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:03.157092    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:03.157092    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:03.157092    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:03.157153    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:03.157181    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:03.157206    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:03 GMT
	I0116 03:10:03.157206    5244 round_trippers.go:580]     Audit-Id: 20a810bb-5dc8-4623-bb66-b9c1993a7d8f
	I0116 03:10:03.157311    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:10:03.652004    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:03.652080    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:03.652080    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:03.652080    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:03.656462    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:03.656462    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:03.656462    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:03.656462    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:03.656462    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:03.657032    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:03 GMT
	I0116 03:10:03.657032    5244 round_trippers.go:580]     Audit-Id: a2fc3581-b1f5-42ac-9ab0-8d93cbfbb654
	I0116 03:10:03.657032    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:03.657251    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:10:03.657433    5244 node_ready.go:58] node "multinode-853900" has status "Ready":"False"
	I0116 03:10:04.149872    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:04.149872    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:04.149872    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:04.149872    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:04.154502    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:04.155222    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:04.155347    5244 round_trippers.go:580]     Audit-Id: 29b096ab-5921-4b60-a0e8-599b27a6be30
	I0116 03:10:04.155383    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:04.155383    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:04.155463    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:04.155463    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:04.155463    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:04 GMT
	I0116 03:10:04.155463    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:10:04.651130    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:04.651218    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:04.651218    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:04.651218    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:04.655984    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:04.655984    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:04.656063    5244 round_trippers.go:580]     Audit-Id: b0a22596-bb64-4867-8104-ac4a96d248f4
	I0116 03:10:04.656063    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:04.656063    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:04.656063    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:04.656063    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:04.656063    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:04 GMT
	I0116 03:10:04.656063    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:10:05.150412    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:05.150483    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:05.150483    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:05.150483    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:05.154937    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:05.154937    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:05.155280    5244 round_trippers.go:580]     Audit-Id: e42eb35b-7ad6-4322-9b2a-916db7c9902f
	I0116 03:10:05.155280    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:05.155280    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:05.155280    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:05.155280    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:05.155280    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:05 GMT
	I0116 03:10:05.155575    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:10:05.651752    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:05.651752    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:05.651752    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:05.651752    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:05.655925    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:05.655925    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:05.655925    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:05.655925    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:05 GMT
	I0116 03:10:05.655925    5244 round_trippers.go:580]     Audit-Id: 79b8a93f-4158-4c39-b864-bcc8a70f1885
	I0116 03:10:05.655925    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:05.655925    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:05.655925    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:05.655925    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:10:06.151396    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:06.151458    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:06.151458    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:06.151458    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:06.155830    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:06.155830    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:06.155919    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:06.155939    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:06.155939    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:06 GMT
	I0116 03:10:06.155939    5244 round_trippers.go:580]     Audit-Id: 370e6d5c-2a2a-4dc7-a756-24ad0ad70dbf
	I0116 03:10:06.155939    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:06.156000    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:06.156271    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:10:06.156682    5244 node_ready.go:58] node "multinode-853900" has status "Ready":"False"
	I0116 03:10:06.651690    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:06.651771    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:06.651771    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:06.651854    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:06.658803    5244 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 03:10:06.658803    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:06.658803    5244 round_trippers.go:580]     Audit-Id: c71464ff-dde5-4d18-997d-152bd688c52c
	I0116 03:10:06.658803    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:06.658803    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:06.658803    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:06.658803    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:06.658803    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:06 GMT
	I0116 03:10:06.658803    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:10:07.153748    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:07.153839    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:07.153839    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:07.153839    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:07.158625    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:07.158720    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:07.158720    5244 round_trippers.go:580]     Audit-Id: 73b4234c-5f31-435f-8f25-f103819cd3f1
	I0116 03:10:07.158720    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:07.158720    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:07.158720    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:07.158720    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:07.158720    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:07 GMT
	I0116 03:10:07.159124    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:10:07.654655    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:07.654910    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:07.654910    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:07.654910    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:07.658460    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:07.658460    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:07.658460    5244 round_trippers.go:580]     Audit-Id: 064fc34d-42b1-4973-a472-b3739ec15ada
	I0116 03:10:07.658460    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:07.658460    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:07.658871    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:07.658871    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:07.658871    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:07 GMT
	I0116 03:10:07.659004    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:10:08.151046    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:08.151046    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:08.151046    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:08.151046    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:08.155781    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:08.155781    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:08.156192    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:08 GMT
	I0116 03:10:08.156192    5244 round_trippers.go:580]     Audit-Id: bb117a9d-fd98-45b5-b18d-2a5ca6266396
	I0116 03:10:08.156192    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:08.156192    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:08.156192    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:08.156265    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:08.156530    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:10:08.156947    5244 node_ready.go:58] node "multinode-853900" has status "Ready":"False"
	I0116 03:10:08.651306    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:08.651306    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:08.651391    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:08.651391    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:08.657877    5244 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 03:10:08.657877    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:08.657877    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:08.657877    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:08.657877    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:08 GMT
	I0116 03:10:08.657877    5244 round_trippers.go:580]     Audit-Id: a3ab5640-d041-4652-83f3-4e83ba61d435
	I0116 03:10:08.657877    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:08.657877    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:08.658408    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1630","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0116 03:10:09.154036    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:09.154036    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:09.154036    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:09.154036    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:09.159603    5244 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:10:09.159603    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:09.159785    5244 round_trippers.go:580]     Audit-Id: fa38826b-d1a9-4825-b5c7-d3948bdbc20a
	I0116 03:10:09.159785    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:09.159785    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:09.159785    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:09.159785    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:09.159785    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:09 GMT
	I0116 03:10:09.160141    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1742","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0116 03:10:09.656401    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:09.656401    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:09.656401    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:09.656644    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:09.662257    5244 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:10:09.662257    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:09.662257    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:09.662257    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:09 GMT
	I0116 03:10:09.662257    5244 round_trippers.go:580]     Audit-Id: 2b299451-5ab3-47c8-88c9-da49ab780b4e
	I0116 03:10:09.662257    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:09.662257    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:09.662257    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:09.662794    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1742","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0116 03:10:10.157778    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:10.157867    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:10.157867    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:10.157867    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:10.161626    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:10.161677    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:10.161677    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:10.161677    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:10 GMT
	I0116 03:10:10.161677    5244 round_trippers.go:580]     Audit-Id: 75b300cd-8b0d-43ba-8d4f-f25f7dbb5596
	I0116 03:10:10.161677    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:10.161677    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:10.161677    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:10.162761    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1742","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0116 03:10:10.163416    5244 node_ready.go:58] node "multinode-853900" has status "Ready":"False"
	I0116 03:10:10.661855    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:10.661961    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:10.661961    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:10.662083    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:10.665766    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:10.665766    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:10.665860    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:10.665860    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:10.665860    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:10.665860    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:10.665950    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:10 GMT
	I0116 03:10:10.665950    5244 round_trippers.go:580]     Audit-Id: 402b2e65-fd99-431a-9cd8-4d9446bcb87f
	I0116 03:10:10.666136    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1742","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0116 03:10:11.148420    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:11.148420    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:11.148420    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:11.148420    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:11.153101    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:11.153101    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:11.153101    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:11.153101    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:11.153101    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:11.153101    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:11.153101    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:11 GMT
	I0116 03:10:11.153101    5244 round_trippers.go:580]     Audit-Id: 13c5189d-39e4-446a-8ffe-9540c15c7168
	I0116 03:10:11.153101    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1742","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0116 03:10:11.653121    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:11.653121    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:11.653121    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:11.653121    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:11.657697    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:11.657697    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:11.657766    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:11.657766    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:11.657766    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:11.657766    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:11.657824    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:11 GMT
	I0116 03:10:11.657824    5244 round_trippers.go:580]     Audit-Id: a0476187-ba4a-498e-bccf-77c5d0ca4cd8
	I0116 03:10:11.658024    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1742","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0116 03:10:12.152347    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:12.152424    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:12.152424    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:12.152529    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:12.156803    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:12.156803    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:12.156803    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:12.156803    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:12.157904    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:12.157904    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:12.157904    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:12 GMT
	I0116 03:10:12.157904    5244 round_trippers.go:580]     Audit-Id: 1f3deb6c-82c2-4890-a51a-eb3d113631b7
	I0116 03:10:12.158394    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1742","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0116 03:10:12.653095    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:12.653095    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:12.653095    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:12.653095    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:12.656936    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:12.656936    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:12.656936    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:12.657359    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:12 GMT
	I0116 03:10:12.657359    5244 round_trippers.go:580]     Audit-Id: 74917829-4999-485d-a010-ec3ec7b42a93
	I0116 03:10:12.657409    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:12.657409    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:12.657441    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:12.658439    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1742","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0116 03:10:12.659009    5244 node_ready.go:58] node "multinode-853900" has status "Ready":"False"
	I0116 03:10:13.160855    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:13.160855    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:13.160855    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:13.160855    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:13.164489    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:13.164489    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:13.164489    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:13.164489    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:13.164489    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:13.164880    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:13 GMT
	I0116 03:10:13.164880    5244 round_trippers.go:580]     Audit-Id: cdc68f4a-b3ac-495f-8139-b4c1222e0541
	I0116 03:10:13.164880    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:13.165114    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1742","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0116 03:10:13.654260    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:13.654363    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:13.654363    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:13.654363    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:13.658278    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:13.658278    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:13.658278    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:13.658278    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:13 GMT
	I0116 03:10:13.658278    5244 round_trippers.go:580]     Audit-Id: c6a19edd-0c5a-4a8e-92c8-0c67cab39395
	I0116 03:10:13.658278    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:13.658278    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:13.658766    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:13.658948    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1742","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0116 03:10:14.147890    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:14.148040    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:14.148040    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:14.148040    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:14.152042    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:14.152042    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:14.152042    5244 round_trippers.go:580]     Audit-Id: 48889891-37c9-4f36-a001-de1c1f4b47fc
	I0116 03:10:14.152042    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:14.152042    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:14.152042    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:14.152042    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:14.152042    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:14 GMT
	I0116 03:10:14.152478    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1742","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0116 03:10:14.651056    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:14.651181    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:14.651181    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:14.651181    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:14.655757    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:14.655757    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:14.655757    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:14 GMT
	I0116 03:10:14.655757    5244 round_trippers.go:580]     Audit-Id: 2a4a8249-0c49-4d06-a680-82fe9a7c6312
	I0116 03:10:14.655757    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:14.655757    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:14.655757    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:14.656268    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:14.656612    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1742","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0116 03:10:15.151286    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:15.151286    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:15.151286    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:15.151286    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:15.155694    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:15.156549    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:15.156549    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:15.156549    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:15.156549    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:15.156549    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:15.156549    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:15 GMT
	I0116 03:10:15.156549    5244 round_trippers.go:580]     Audit-Id: fb7b4826-89e9-4595-a306-a2fb01e71f09
	I0116 03:10:15.156854    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1742","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0116 03:10:15.157606    5244 node_ready.go:58] node "multinode-853900" has status "Ready":"False"
	I0116 03:10:15.648487    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:15.648487    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:15.648487    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:15.648487    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:15.652364    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:15.652364    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:15.653072    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:15.653072    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:15 GMT
	I0116 03:10:15.653072    5244 round_trippers.go:580]     Audit-Id: 9c4a2783-b745-4250-98be-ae6bd5ff297b
	I0116 03:10:15.653072    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:15.653072    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:15.653072    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:15.653367    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1742","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0116 03:10:16.150349    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:16.150349    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:16.150349    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:16.150349    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:16.154916    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:16.154916    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:16.154916    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:16.154916    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:16.155061    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:16.155061    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:16.155061    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:16 GMT
	I0116 03:10:16.155129    5244 round_trippers.go:580]     Audit-Id: 36d6e3f3-bf02-46bd-aa5c-8ae1dfb70428
	I0116 03:10:16.155601    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1742","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0116 03:10:16.648437    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:16.648437    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:16.648576    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:16.648576    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:16.651732    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:16.651732    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:16.652087    5244 round_trippers.go:580]     Audit-Id: 7c417b43-c812-4eb6-a56a-02679f5f9c0f
	I0116 03:10:16.652087    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:16.652087    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:16.652087    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:16.652087    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:16.652087    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:16 GMT
	I0116 03:10:16.652648    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1742","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0116 03:10:17.149493    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:17.149493    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:17.149493    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:17.149493    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:17.153074    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:17.153422    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:17.153422    5244 round_trippers.go:580]     Audit-Id: c15452c3-059f-4730-966f-34accd2b640c
	I0116 03:10:17.153422    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:17.153422    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:17.153422    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:17.153537    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:17.153537    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:17 GMT
	I0116 03:10:17.153673    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1771","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0116 03:10:17.154441    5244 node_ready.go:49] node "multinode-853900" has status "Ready":"True"
	I0116 03:10:17.154496    5244 node_ready.go:38] duration metric: took 15.5077977s waiting for node "multinode-853900" to be "Ready" ...
	I0116 03:10:17.154496    5244 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:10:17.154587    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods
	I0116 03:10:17.154643    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:17.154643    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:17.154695    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:17.160294    5244 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:10:17.160294    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:17.160294    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:17.160294    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:17.160294    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:17.160294    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:17.160798    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:17 GMT
	I0116 03:10:17.160798    5244 round_trippers.go:580]     Audit-Id: 20070abd-98da-4e09-8baf-b6fa7f96d4e8
	I0116 03:10:17.163727    5244 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1771"},"items":[{"metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"1761","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82547 chars]
	I0116 03:10:17.167028    5244 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:17.167607    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-62jpz
	I0116 03:10:17.167607    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:17.167607    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:17.167607    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:17.170982    5244 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:10:17.170982    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:17.170982    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:17.170982    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:17.170982    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:17 GMT
	I0116 03:10:17.170982    5244 round_trippers.go:580]     Audit-Id: 50feebe5-c12c-492c-abfd-8b25d1bd2281
	I0116 03:10:17.170982    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:17.170982    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:17.170982    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"1761","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0116 03:10:17.171702    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:17.171702    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:17.171702    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:17.171702    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:17.176137    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:17.176296    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:17.176296    5244 round_trippers.go:580]     Audit-Id: e2322b7d-3cef-4df2-a877-474244815861
	I0116 03:10:17.176296    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:17.176296    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:17.176296    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:17.176411    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:17.176426    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:17 GMT
	I0116 03:10:17.176426    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1771","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0116 03:10:17.177314    5244 pod_ready.go:92] pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace has status "Ready":"True"
	I0116 03:10:17.177314    5244 pod_ready.go:81] duration metric: took 10.2863ms waiting for pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:17.177314    5244 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:17.177314    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-853900
	I0116 03:10:17.177314    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:17.177314    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:17.177314    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:17.181584    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:17.181584    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:17.181584    5244 round_trippers.go:580]     Audit-Id: c76c5cc4-ac92-43c0-80bf-eb6680491634
	I0116 03:10:17.181584    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:17.181584    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:17.181584    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:17.181584    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:17.181584    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:17 GMT
	I0116 03:10:17.182286    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-853900","namespace":"kube-system","uid":"0830a000-5e72-4c45-a843-1dd557d188eb","resourceVersion":"1718","creationTimestamp":"2024-01-16T03:09:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.125.182:2379","kubernetes.io/config.hash":"69d98d086aafe436cd9405e0584ec9d9","kubernetes.io/config.mirror":"69d98d086aafe436cd9405e0584ec9d9","kubernetes.io/config.seen":"2024-01-16T03:09:50.494161665Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:09:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0116 03:10:17.182849    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:17.182849    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:17.182849    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:17.182849    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:17.185527    5244 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:10:17.185527    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:17.185527    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:17.186239    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:17.186239    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:17 GMT
	I0116 03:10:17.186239    5244 round_trippers.go:580]     Audit-Id: 4e48b41c-3719-4986-ab80-cd18bc1b1e46
	I0116 03:10:17.186239    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:17.186239    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:17.186597    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1771","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0116 03:10:17.187204    5244 pod_ready.go:92] pod "etcd-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 03:10:17.187204    5244 pod_ready.go:81] duration metric: took 9.8903ms waiting for pod "etcd-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:17.187267    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:17.187349    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-853900
	I0116 03:10:17.187349    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:17.187349    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:17.187430    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:17.190283    5244 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:10:17.190283    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:17.190283    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:17.190283    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:17.190283    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:17 GMT
	I0116 03:10:17.190283    5244 round_trippers.go:580]     Audit-Id: 246cb7a9-9fa1-4243-8dd4-e6b04d47f721
	I0116 03:10:17.190283    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:17.190283    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:17.190283    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-853900","namespace":"kube-system","uid":"cb2bb8c0-e51a-46cf-87f4-5c3ad0287455","resourceVersion":"1722","creationTimestamp":"2024-01-16T03:10:01Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.125.182:8443","kubernetes.io/config.hash":"e8b156384a67a45d4dc14390f3884653","kubernetes.io/config.mirror":"e8b156384a67a45d4dc14390f3884653","kubernetes.io/config.seen":"2024-01-16T03:09:50.494166665Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:10:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0116 03:10:17.190283    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:17.190283    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:17.190283    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:17.190283    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:17.193804    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:17.193804    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:17.193804    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:17.193804    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:17.193804    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:17 GMT
	I0116 03:10:17.193804    5244 round_trippers.go:580]     Audit-Id: 248bb11f-bd68-4116-8d1f-9bcb16033667
	I0116 03:10:17.193804    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:17.193804    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:17.194847    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1771","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0116 03:10:17.195530    5244 pod_ready.go:92] pod "kube-apiserver-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 03:10:17.195530    5244 pod_ready.go:81] duration metric: took 8.2453ms waiting for pod "kube-apiserver-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:17.195599    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:17.195691    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-853900
	I0116 03:10:17.195691    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:17.195816    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:17.195816    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:17.198507    5244 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:10:17.198507    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:17.198507    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:17.198507    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:17.198507    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:17.198507    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:17 GMT
	I0116 03:10:17.198507    5244 round_trippers.go:580]     Audit-Id: 9b889734-e379-4426-aaef-4f30d3bf3ef3
	I0116 03:10:17.198507    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:17.199557    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-853900","namespace":"kube-system","uid":"5a4d4e86-9836-401a-8d98-1519ff75a0ec","resourceVersion":"1746","creationTimestamp":"2024-01-16T02:48:08Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f09e1ab837c9ef5b247e4d57afe8993b","kubernetes.io/config.mirror":"f09e1ab837c9ef5b247e4d57afe8993b","kubernetes.io/config.seen":"2024-01-16T02:48:00.146129509Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0116 03:10:17.200270    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:17.200270    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:17.200322    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:17.200322    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:17.202212    5244 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0116 03:10:17.202212    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:17.203218    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:17 GMT
	I0116 03:10:17.203218    5244 round_trippers.go:580]     Audit-Id: b4742764-eb7b-4795-8679-448af42a690b
	I0116 03:10:17.203218    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:17.203218    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:17.203218    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:17.203218    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:17.203218    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1771","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0116 03:10:17.203218    5244 pod_ready.go:92] pod "kube-controller-manager-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 03:10:17.203218    5244 pod_ready.go:81] duration metric: took 7.6191ms waiting for pod "kube-controller-manager-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:17.203218    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h977r" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:17.350072    5244 request.go:629] Waited for 146.582ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h977r
	I0116 03:10:17.350202    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h977r
	I0116 03:10:17.350456    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:17.350456    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:17.350456    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:17.354925    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:17.355312    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:17.355312    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:17.355312    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:17.355312    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:17.355312    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:17 GMT
	I0116 03:10:17.355402    5244 round_trippers.go:580]     Audit-Id: cb50ab3b-c7f1-47f5-9b87-9ab78a473f33
	I0116 03:10:17.355402    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:17.355514    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-h977r","generateName":"kube-proxy-","namespace":"kube-system","uid":"5434ef27-d483-46c1-a95d-bd86163ee965","resourceVersion":"587","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0116 03:10:17.553139    5244 request.go:629] Waited for 196.3206ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:10:17.553374    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:10:17.553400    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:17.553466    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:17.553492    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:17.557370    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:17.557968    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:17.557968    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:17.557968    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:17 GMT
	I0116 03:10:17.558192    5244 round_trippers.go:580]     Audit-Id: dc61572c-c97b-483c-96d7-be502266c767
	I0116 03:10:17.558192    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:17.558192    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:17.558192    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:17.558192    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84","resourceVersion":"1550","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_05_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3819 chars]
	I0116 03:10:17.559012    5244 pod_ready.go:92] pod "kube-proxy-h977r" in "kube-system" namespace has status "Ready":"True"
	I0116 03:10:17.559012    5244 pod_ready.go:81] duration metric: took 355.7922ms waiting for pod "kube-proxy-h977r" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:17.559012    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rfglr" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:17.755474    5244 request.go:629] Waited for 196.28ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfglr
	I0116 03:10:17.755641    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfglr
	I0116 03:10:17.755641    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:17.755641    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:17.755710    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:17.760552    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:17.760718    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:17.760718    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:17.760718    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:17 GMT
	I0116 03:10:17.760718    5244 round_trippers.go:580]     Audit-Id: 35b8283a-7890-46a2-948c-b5c6ad20e339
	I0116 03:10:17.760718    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:17.760718    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:17.760718    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:17.761047    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rfglr","generateName":"kube-proxy-","namespace":"kube-system","uid":"80452c87-583e-40d7-aec9-4c790772a538","resourceVersion":"1552","creationTimestamp":"2024-01-16T02:55:40Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:55:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0116 03:10:17.956600    5244 request.go:629] Waited for 194.708ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:10:17.956891    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:10:17.956891    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:17.957043    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:17.957043    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:17.961618    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:17.961618    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:17.961618    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:17.961618    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:17.961696    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:17.961696    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:17 GMT
	I0116 03:10:17.961696    5244 round_trippers.go:580]     Audit-Id: 293f7311-b433-4c62-ad9b-b688e7665e88
	I0116 03:10:17.961696    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:17.961733    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"99b99df3-ad5b-4c59-a7a0-406b850f5433","resourceVersion":"1574","creationTimestamp":"2024-01-16T03:05:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_05_50_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:05:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3635 chars]
	I0116 03:10:17.962540    5244 pod_ready.go:92] pod "kube-proxy-rfglr" in "kube-system" namespace has status "Ready":"True"
	I0116 03:10:17.962540    5244 pod_ready.go:81] duration metric: took 403.5248ms waiting for pod "kube-proxy-rfglr" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:17.962540    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tpc2g" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:18.160784    5244 request.go:629] Waited for 197.5503ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tpc2g
	I0116 03:10:18.160910    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tpc2g
	I0116 03:10:18.160975    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:18.160975    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:18.160975    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:18.164383    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:18.165169    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:18.165169    5244 round_trippers.go:580]     Audit-Id: 5d8e5fbb-9f40-4f73-9d83-c4cbfef4b22b
	I0116 03:10:18.165169    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:18.165278    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:18.165278    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:18.165278    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:18.165278    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:18 GMT
	I0116 03:10:18.165572    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tpc2g","generateName":"kube-proxy-","namespace":"kube-system","uid":"0cb279ef-9d3a-4c55-9c57-ce7eede8a052","resourceVersion":"1708","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0116 03:10:18.363529    5244 request.go:629] Waited for 196.9429ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:18.363988    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:18.363988    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:18.363988    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:18.363988    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:18.367951    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:18.367951    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:18.368734    5244 round_trippers.go:580]     Audit-Id: 8fae95c3-6412-43ba-ac32-1555f72d613a
	I0116 03:10:18.368734    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:18.368734    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:18.368734    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:18.368734    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:18.368734    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:18 GMT
	I0116 03:10:18.369131    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1771","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0116 03:10:18.369131    5244 pod_ready.go:92] pod "kube-proxy-tpc2g" in "kube-system" namespace has status "Ready":"True"
	I0116 03:10:18.369666    5244 pod_ready.go:81] duration metric: took 407.1232ms waiting for pod "kube-proxy-tpc2g" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:18.369666    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:18.550274    5244 request.go:629] Waited for 180.3238ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-853900
	I0116 03:10:18.550539    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-853900
	I0116 03:10:18.550539    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:18.550539    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:18.550539    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:18.555952    5244 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:10:18.555952    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:18.555952    5244 round_trippers.go:580]     Audit-Id: cf0210c4-5e35-4f8e-ac06-31890c3f2c34
	I0116 03:10:18.555952    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:18.555952    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:18.555952    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:18.556410    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:18.556410    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:18 GMT
	I0116 03:10:18.556635    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-853900","namespace":"kube-system","uid":"d75db7e3-c171-428f-9c08-f268ce16da31","resourceVersion":"1723","creationTimestamp":"2024-01-16T02:48:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aff36fe37a6d6fc8d309826a0f54f93d","kubernetes.io/config.mirror":"aff36fe37a6d6fc8d309826a0f54f93d","kubernetes.io/config.seen":"2024-01-16T02:48:09.211494477Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0116 03:10:18.753254    5244 request.go:629] Waited for 195.6695ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:18.753576    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:10:18.753576    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:18.753576    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:18.753576    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:18.757968    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:10:18.758422    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:18.758422    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:18.758422    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:18.758501    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:18 GMT
	I0116 03:10:18.758501    5244 round_trippers.go:580]     Audit-Id: 399ec13e-eb0a-47da-a6d7-8e65f95b6985
	I0116 03:10:18.758501    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:18.758501    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:18.758767    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1771","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0116 03:10:18.759248    5244 pod_ready.go:92] pod "kube-scheduler-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 03:10:18.759248    5244 pod_ready.go:81] duration metric: took 389.5795ms waiting for pod "kube-scheduler-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:10:18.759248    5244 pod_ready.go:38] duration metric: took 1.6047413s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:10:18.759354    5244 api_server.go:52] waiting for apiserver process to appear ...
	I0116 03:10:18.773449    5244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:10:18.795113    5244 command_runner.go:130] > 1889
	I0116 03:10:18.795170    5244 api_server.go:72] duration metric: took 17.277846s to wait for apiserver process to appear ...
	I0116 03:10:18.795170    5244 api_server.go:88] waiting for apiserver healthz status ...
	I0116 03:10:18.795263    5244 api_server.go:253] Checking apiserver healthz at https://172.27.125.182:8443/healthz ...
	I0116 03:10:18.804606    5244 api_server.go:279] https://172.27.125.182:8443/healthz returned 200:
	ok
	I0116 03:10:18.805677    5244 round_trippers.go:463] GET https://172.27.125.182:8443/version
	I0116 03:10:18.805677    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:18.805677    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:18.805677    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:18.807945    5244 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:10:18.807945    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:18.807945    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:18 GMT
	I0116 03:10:18.808515    5244 round_trippers.go:580]     Audit-Id: daf7f2dd-491f-4cc7-9a9e-3e58fa2557fd
	I0116 03:10:18.808515    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:18.808515    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:18.808593    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:18.808593    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:18.808635    5244 round_trippers.go:580]     Content-Length: 264
	I0116 03:10:18.808799    5244 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0116 03:10:18.808880    5244 api_server.go:141] control plane version: v1.28.4
	I0116 03:10:18.808880    5244 api_server.go:131] duration metric: took 13.7093ms to wait for apiserver health ...
	I0116 03:10:18.808880    5244 system_pods.go:43] waiting for kube-system pods to appear ...
	I0116 03:10:18.954188    5244 request.go:629] Waited for 145.3072ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods
	I0116 03:10:18.954469    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods
	I0116 03:10:18.954469    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:18.954469    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:18.954469    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:18.964396    5244 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0116 03:10:18.964467    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:18.964467    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:18 GMT
	I0116 03:10:18.964519    5244 round_trippers.go:580]     Audit-Id: 234316ac-9f1b-4abf-88df-1282998c2432
	I0116 03:10:18.964574    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:18.964574    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:18.964623    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:18.964623    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:18.967103    5244 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1773"},"items":[{"metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"1761","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82547 chars]
	I0116 03:10:18.971486    5244 system_pods.go:59] 12 kube-system pods found
	I0116 03:10:18.971486    5244 system_pods.go:61] "coredns-5dd5756b68-62jpz" [c028c1eb-0071-40bf-a163-6f71a10dc945] Running
	I0116 03:10:18.971540    5244 system_pods.go:61] "etcd-multinode-853900" [0830a000-5e72-4c45-a843-1dd557d188eb] Running
	I0116 03:10:18.971540    5244 system_pods.go:61] "kindnet-6s9wr" [3ff3a556-6003-48f3-8035-67ce0ff9bc49] Running
	I0116 03:10:18.971540    5244 system_pods.go:61] "kindnet-b8hwf" [5cc5feaf-8ff7-4d36-8e75-3fd1bd07d2ec] Running
	I0116 03:10:18.971540    5244 system_pods.go:61] "kindnet-x5nvv" [2c841275-aff6-41c4-a995-5265f31aaa2d] Running
	I0116 03:10:18.971540    5244 system_pods.go:61] "kube-apiserver-multinode-853900" [cb2bb8c0-e51a-46cf-87f4-5c3ad0287455] Running
	I0116 03:10:18.971593    5244 system_pods.go:61] "kube-controller-manager-multinode-853900" [5a4d4e86-9836-401a-8d98-1519ff75a0ec] Running
	I0116 03:10:18.971593    5244 system_pods.go:61] "kube-proxy-h977r" [5434ef27-d483-46c1-a95d-bd86163ee965] Running
	I0116 03:10:18.971593    5244 system_pods.go:61] "kube-proxy-rfglr" [80452c87-583e-40d7-aec9-4c790772a538] Running
	I0116 03:10:18.971593    5244 system_pods.go:61] "kube-proxy-tpc2g" [0cb279ef-9d3a-4c55-9c57-ce7eede8a052] Running
	I0116 03:10:18.971643    5244 system_pods.go:61] "kube-scheduler-multinode-853900" [d75db7e3-c171-428f-9c08-f268ce16da31] Running
	I0116 03:10:18.971643    5244 system_pods.go:61] "storage-provisioner" [5a08e24f-688d-4839-9157-d9a0b92bd32c] Running
	I0116 03:10:18.971643    5244 system_pods.go:74] duration metric: took 162.7619ms to wait for pod list to return data ...
	I0116 03:10:18.971643    5244 default_sa.go:34] waiting for default service account to be created ...
	I0116 03:10:19.159073    5244 request.go:629] Waited for 187.1867ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/default/serviceaccounts
	I0116 03:10:19.159303    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/default/serviceaccounts
	I0116 03:10:19.159303    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:19.159303    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:19.159459    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:19.163169    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:10:19.163169    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:19.163169    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:19 GMT
	I0116 03:10:19.163169    5244 round_trippers.go:580]     Audit-Id: eede1447-c41c-40d9-a2a7-b425222c2fe5
	I0116 03:10:19.163169    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:19.163838    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:19.163838    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:19.163838    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:19.163838    5244 round_trippers.go:580]     Content-Length: 262
	I0116 03:10:19.163838    5244 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1777"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"e81a9b1b-f727-439a-873c-17af64fc234f","resourceVersion":"308","creationTimestamp":"2024-01-16T02:48:21Z"}}]}
	I0116 03:10:19.163973    5244 default_sa.go:45] found service account: "default"
	I0116 03:10:19.163973    5244 default_sa.go:55] duration metric: took 192.3292ms for default service account to be created ...
	I0116 03:10:19.163973    5244 system_pods.go:116] waiting for k8s-apps to be running ...
	I0116 03:10:19.361103    5244 request.go:629] Waited for 196.8258ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods
	I0116 03:10:19.361346    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods
	I0116 03:10:19.361346    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:19.361346    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:19.361346    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:19.368769    5244 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 03:10:19.368854    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:19.368854    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:19.368854    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:19.368854    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:19.368854    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:19.368854    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:19 GMT
	I0116 03:10:19.368933    5244 round_trippers.go:580]     Audit-Id: 85d8defb-6036-4c39-a5ac-c4f2258f5c0d
	I0116 03:10:19.370868    5244 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1777"},"items":[{"metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"1761","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82547 chars]
	I0116 03:10:19.374721    5244 system_pods.go:86] 12 kube-system pods found
	I0116 03:10:19.374721    5244 system_pods.go:89] "coredns-5dd5756b68-62jpz" [c028c1eb-0071-40bf-a163-6f71a10dc945] Running
	I0116 03:10:19.374892    5244 system_pods.go:89] "etcd-multinode-853900" [0830a000-5e72-4c45-a843-1dd557d188eb] Running
	I0116 03:10:19.374892    5244 system_pods.go:89] "kindnet-6s9wr" [3ff3a556-6003-48f3-8035-67ce0ff9bc49] Running
	I0116 03:10:19.374892    5244 system_pods.go:89] "kindnet-b8hwf" [5cc5feaf-8ff7-4d36-8e75-3fd1bd07d2ec] Running
	I0116 03:10:19.374892    5244 system_pods.go:89] "kindnet-x5nvv" [2c841275-aff6-41c4-a995-5265f31aaa2d] Running
	I0116 03:10:19.374892    5244 system_pods.go:89] "kube-apiserver-multinode-853900" [cb2bb8c0-e51a-46cf-87f4-5c3ad0287455] Running
	I0116 03:10:19.374892    5244 system_pods.go:89] "kube-controller-manager-multinode-853900" [5a4d4e86-9836-401a-8d98-1519ff75a0ec] Running
	I0116 03:10:19.374892    5244 system_pods.go:89] "kube-proxy-h977r" [5434ef27-d483-46c1-a95d-bd86163ee965] Running
	I0116 03:10:19.374892    5244 system_pods.go:89] "kube-proxy-rfglr" [80452c87-583e-40d7-aec9-4c790772a538] Running
	I0116 03:10:19.374892    5244 system_pods.go:89] "kube-proxy-tpc2g" [0cb279ef-9d3a-4c55-9c57-ce7eede8a052] Running
	I0116 03:10:19.374892    5244 system_pods.go:89] "kube-scheduler-multinode-853900" [d75db7e3-c171-428f-9c08-f268ce16da31] Running
	I0116 03:10:19.374892    5244 system_pods.go:89] "storage-provisioner" [5a08e24f-688d-4839-9157-d9a0b92bd32c] Running
	I0116 03:10:19.374892    5244 system_pods.go:126] duration metric: took 210.917ms to wait for k8s-apps to be running ...
	I0116 03:10:19.374892    5244 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:10:19.388743    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:10:19.411011    5244 system_svc.go:56] duration metric: took 36.1192ms WaitForService to wait for kubelet.
	I0116 03:10:19.411056    5244 kubeadm.go:581] duration metric: took 17.8937273s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:10:19.411270    5244 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:10:19.562833    5244 request.go:629] Waited for 151.3134ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes
	I0116 03:10:19.562995    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes
	I0116 03:10:19.562995    5244 round_trippers.go:469] Request Headers:
	I0116 03:10:19.562995    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:10:19.563045    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:10:19.569659    5244 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 03:10:19.569659    5244 round_trippers.go:577] Response Headers:
	I0116 03:10:19.570656    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:10:19 GMT
	I0116 03:10:19.570680    5244 round_trippers.go:580]     Audit-Id: 3e672243-73ca-40ff-ab1e-4a116fef9b1c
	I0116 03:10:19.570680    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:10:19.570680    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:10:19.570680    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:10:19.570680    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:10:19.571126    5244 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1777"},"items":[{"metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1774","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14730 chars]
	I0116 03:10:19.572256    5244 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:10:19.572256    5244 node_conditions.go:123] node cpu capacity is 2
	I0116 03:10:19.572256    5244 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:10:19.572256    5244 node_conditions.go:123] node cpu capacity is 2
	I0116 03:10:19.572256    5244 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:10:19.572256    5244 node_conditions.go:123] node cpu capacity is 2
	I0116 03:10:19.572256    5244 node_conditions.go:105] duration metric: took 160.9852ms to run NodePressure ...
	I0116 03:10:19.572256    5244 start.go:228] waiting for startup goroutines ...
	I0116 03:10:19.572256    5244 start.go:233] waiting for cluster config update ...
	I0116 03:10:19.572256    5244 start.go:242] writing updated cluster config ...
	I0116 03:10:19.587596    5244 config.go:182] Loaded profile config "multinode-853900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:10:19.587596    5244 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\config.json ...
	I0116 03:10:19.591020    5244 out.go:177] * Starting worker node multinode-853900-m02 in cluster multinode-853900
	I0116 03:10:19.591020    5244 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0116 03:10:19.591020    5244 cache.go:56] Caching tarball of preloaded images
	I0116 03:10:19.592161    5244 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0116 03:10:19.592203    5244 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0116 03:10:19.592203    5244 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\config.json ...
	I0116 03:10:19.595368    5244 start.go:365] acquiring machines lock for multinode-853900-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:10:19.595563    5244 start.go:369] acquired machines lock for "multinode-853900-m02" in 194.3µs
	I0116 03:10:19.595752    5244 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:10:19.595809    5244 fix.go:54] fixHost starting: m02
	I0116 03:10:19.596332    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:10:21.723959    5244 main.go:141] libmachine: [stdout =====>] : Off
	
	I0116 03:10:21.724144    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:10:21.724253    5244 fix.go:102] recreateIfNeeded on multinode-853900-m02: state=Stopped err=<nil>
	W0116 03:10:21.724253    5244 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:10:21.725246    5244 out.go:177] * Restarting existing hyperv VM for "multinode-853900-m02" ...
	I0116 03:10:21.725918    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-853900-m02
	I0116 03:10:24.631539    5244 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:10:24.631539    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:10:24.631539    5244 main.go:141] libmachine: Waiting for host to start...
	I0116 03:10:24.631783    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:10:26.983725    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:10:26.983725    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:10:26.983725    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:10:29.508994    5244 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:10:29.509162    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:10:30.510457    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:10:32.754492    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:10:32.754492    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:10:32.754590    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:10:35.349008    5244 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:10:35.349178    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:10:36.363365    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:10:38.528359    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:10:38.528423    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:10:38.528482    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:10:41.053893    5244 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:10:41.053893    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:10:42.068696    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:10:44.278694    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:10:44.279062    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:10:44.279139    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:10:46.791386    5244 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:10:46.791496    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:10:47.798494    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:10:50.025559    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:10:50.025734    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:10:50.026101    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:10:52.554482    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.77
	
	I0116 03:10:52.554482    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:10:52.557223    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:10:54.671640    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:10:54.671640    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:10:54.671773    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:10:57.254894    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.77
	
	I0116 03:10:57.254894    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:10:57.255119    5244 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\config.json ...
	I0116 03:10:57.258043    5244 machine.go:88] provisioning docker machine ...
	I0116 03:10:57.258043    5244 buildroot.go:166] provisioning hostname "multinode-853900-m02"
	I0116 03:10:57.258043    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:10:59.454146    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:10:59.454146    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:10:59.454241    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:11:02.017412    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.77
	
	I0116 03:11:02.017412    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:02.024033    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:11:02.024706    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.77 22 <nil> <nil>}
	I0116 03:11:02.024706    5244 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-853900-m02 && echo "multinode-853900-m02" | sudo tee /etc/hostname
	I0116 03:11:02.200699    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-853900-m02
	
	I0116 03:11:02.200699    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:11:04.340364    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:11:04.340364    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:04.340364    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:11:06.866186    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.77
	
	I0116 03:11:06.866186    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:06.872659    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:11:06.872737    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.77 22 <nil> <nil>}
	I0116 03:11:06.872737    5244 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-853900-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-853900-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-853900-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:11:07.043796    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0116 03:11:07.043796    5244 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0116 03:11:07.043796    5244 buildroot.go:174] setting up certificates
	I0116 03:11:07.043796    5244 provision.go:83] configureAuth start
	I0116 03:11:07.043796    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:11:09.157989    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:11:09.157989    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:09.157989    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:11:11.670948    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.77
	
	I0116 03:11:11.670948    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:11.670948    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:11:13.851235    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:11:13.851306    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:13.851306    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:11:16.378428    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.77
	
	I0116 03:11:16.378428    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:16.378521    5244 provision.go:138] copyHostCerts
	I0116 03:11:16.378780    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0116 03:11:16.378864    5244 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0116 03:11:16.378864    5244 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0116 03:11:16.379639    5244 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0116 03:11:16.381061    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0116 03:11:16.381384    5244 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0116 03:11:16.381435    5244 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0116 03:11:16.381528    5244 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0116 03:11:16.382942    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0116 03:11:16.382942    5244 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0116 03:11:16.382942    5244 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0116 03:11:16.383778    5244 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1675 bytes)
	I0116 03:11:16.384893    5244 provision.go:112] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-853900-m02 san=[172.27.125.77 172.27.125.77 localhost 127.0.0.1 minikube multinode-853900-m02]
	I0116 03:11:16.504536    5244 provision.go:172] copyRemoteCerts
	I0116 03:11:16.518460    5244 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:11:16.518460    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:11:18.639147    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:11:18.639369    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:18.639559    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:11:21.139974    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.77
	
	I0116 03:11:21.140171    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:21.140396    5244 sshutil.go:53] new ssh client: &{IP:172.27.125.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m02\id_rsa Username:docker}
	I0116 03:11:21.250962    5244 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7324708s)
	I0116 03:11:21.250962    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0116 03:11:21.251866    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:11:21.289284    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0116 03:11:21.289284    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0116 03:11:21.330885    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0116 03:11:21.331466    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0116 03:11:21.370052    5244 provision.go:86] duration metric: configureAuth took 14.3261614s
	I0116 03:11:21.370161    5244 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:11:21.370937    5244 config.go:182] Loaded profile config "multinode-853900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:11:21.371066    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:11:23.473018    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:11:23.473199    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:23.473482    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:11:26.049123    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.77
	
	I0116 03:11:26.049123    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:26.062360    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:11:26.063183    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.77 22 <nil> <nil>}
	I0116 03:11:26.063183    5244 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0116 03:11:26.214863    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0116 03:11:26.214863    5244 buildroot.go:70] root file system type: tmpfs
	I0116 03:11:26.214863    5244 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0116 03:11:26.214863    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:11:28.340698    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:11:28.340853    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:28.340867    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:11:30.821028    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.77
	
	I0116 03:11:30.821241    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:30.827603    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:11:30.828180    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.77 22 <nil> <nil>}
	I0116 03:11:30.828736    5244 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.125.182"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0116 03:11:31.004738    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.125.182
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0116 03:11:31.004820    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:11:33.167684    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:11:33.167684    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:33.167684    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:11:35.747006    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.77
	
	I0116 03:11:35.747006    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:35.756458    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:11:35.757395    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.77 22 <nil> <nil>}
	I0116 03:11:35.757544    5244 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0116 03:11:36.891912    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0116 03:11:36.891912    5244 machine.go:91] provisioned docker machine in 39.6336081s
	I0116 03:11:36.891912    5244 start.go:300] post-start starting for "multinode-853900-m02" (driver="hyperv")
	I0116 03:11:36.891912    5244 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:11:36.908232    5244 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:11:36.908232    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:11:39.013212    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:11:39.013291    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:39.013291    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:11:41.574575    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.77
	
	I0116 03:11:41.574650    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:41.574650    5244 sshutil.go:53] new ssh client: &{IP:172.27.125.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m02\id_rsa Username:docker}
	I0116 03:11:41.684831    5244 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7765106s)
	I0116 03:11:41.699006    5244 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:11:41.704125    5244 command_runner.go:130] > NAME=Buildroot
	I0116 03:11:41.704125    5244 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 03:11:41.704125    5244 command_runner.go:130] > ID=buildroot
	I0116 03:11:41.704125    5244 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 03:11:41.704125    5244 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 03:11:41.704125    5244 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:11:41.704125    5244 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0116 03:11:41.705132    5244 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0116 03:11:41.706402    5244 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> 135082.pem in /etc/ssl/certs
	I0116 03:11:41.706453    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> /etc/ssl/certs/135082.pem
	I0116 03:11:41.721744    5244 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:11:41.736784    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem --> /etc/ssl/certs/135082.pem (1708 bytes)
	I0116 03:11:41.774185    5244 start.go:303] post-start completed in 4.8822401s
	I0116 03:11:41.774185    5244 fix.go:56] fixHost completed within 1m22.1778899s
	I0116 03:11:41.774185    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:11:43.974757    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:11:43.974757    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:43.974881    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:11:46.515628    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.77
	
	I0116 03:11:46.515628    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:46.522042    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:11:46.522949    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.77 22 <nil> <nil>}
	I0116 03:11:46.522949    5244 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:11:46.680505    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374706.679245941
	
	I0116 03:11:46.680505    5244 fix.go:206] guest clock: 1705374706.679245941
	I0116 03:11:46.680505    5244 fix.go:219] Guest: 2024-01-16 03:11:46.679245941 +0000 UTC Remote: 2024-01-16 03:11:41.774185 +0000 UTC m=+226.254586501 (delta=4.905060941s)
	I0116 03:11:46.680505    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:11:48.787829    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:11:48.787829    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:48.788052    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:11:51.322742    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.77
	
	I0116 03:11:51.323043    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:51.328900    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:11:51.328900    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.77 22 <nil> <nil>}
	I0116 03:11:51.328900    5244 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705374706
	I0116 03:11:51.495758    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jan 16 03:11:46 UTC 2024
	
	I0116 03:11:51.495838    5244 fix.go:226] clock set: Tue Jan 16 03:11:46 UTC 2024
	 (err=<nil>)
	I0116 03:11:51.495838    5244 start.go:83] releasing machines lock for "multinode-853900-m02", held for 1m31.8996068s
	I0116 03:11:51.496038    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:11:53.640196    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:11:53.640196    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:53.640300    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:11:56.199512    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.77
	
	I0116 03:11:56.199586    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:56.200969    5244 out.go:177] * Found network options:
	I0116 03:11:56.202148    5244 out.go:177]   - NO_PROXY=172.27.125.182
	W0116 03:11:56.202824    5244 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 03:11:56.203642    5244 out.go:177]   - NO_PROXY=172.27.125.182
	W0116 03:11:56.204384    5244 proxy.go:119] fail to check proxy env: Error ip not in block
	W0116 03:11:56.205875    5244 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 03:11:56.209053    5244 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:11:56.209291    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:11:56.220925    5244 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 03:11:56.220925    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:11:58.393541    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:11:58.393541    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:58.393541    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:11:58.393541    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:11:58.393798    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:11:58.393798    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:12:01.043584    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.77
	
	I0116 03:12:01.043692    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:12:01.043692    5244 sshutil.go:53] new ssh client: &{IP:172.27.125.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m02\id_rsa Username:docker}
	I0116 03:12:01.065507    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.77
	
	I0116 03:12:01.065507    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:12:01.066492    5244 sshutil.go:53] new ssh client: &{IP:172.27.125.77 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m02\id_rsa Username:docker}
	I0116 03:12:01.155002    5244 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0116 03:12:01.155897    5244 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9349036s)
	W0116 03:12:01.156002    5244 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:12:01.170668    5244 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:12:01.253964    5244 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 03:12:01.254939    5244 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0116 03:12:01.254939    5244 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:12:01.255028    5244 start.go:475] detecting cgroup driver to use...
	I0116 03:12:01.255217    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:12:01.255300    5244 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.0462135s)
	I0116 03:12:01.290404    5244 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0116 03:12:01.305586    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0116 03:12:01.339871    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0116 03:12:01.357909    5244 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0116 03:12:01.374842    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0116 03:12:01.409779    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 03:12:01.439931    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0116 03:12:01.473436    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 03:12:01.503922    5244 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:12:01.532974    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0116 03:12:01.563560    5244 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:12:01.580617    5244 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0116 03:12:01.594484    5244 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:12:01.623451    5244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:12:01.799449    5244 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0116 03:12:01.826401    5244 start.go:475] detecting cgroup driver to use...
	I0116 03:12:01.840143    5244 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0116 03:12:01.860608    5244 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0116 03:12:01.860684    5244 command_runner.go:130] > [Unit]
	I0116 03:12:01.860684    5244 command_runner.go:130] > Description=Docker Application Container Engine
	I0116 03:12:01.860684    5244 command_runner.go:130] > Documentation=https://docs.docker.com
	I0116 03:12:01.860684    5244 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0116 03:12:01.860684    5244 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0116 03:12:01.860684    5244 command_runner.go:130] > StartLimitBurst=3
	I0116 03:12:01.860684    5244 command_runner.go:130] > StartLimitIntervalSec=60
	I0116 03:12:01.860684    5244 command_runner.go:130] > [Service]
	I0116 03:12:01.860684    5244 command_runner.go:130] > Type=notify
	I0116 03:12:01.860684    5244 command_runner.go:130] > Restart=on-failure
	I0116 03:12:01.860684    5244 command_runner.go:130] > Environment=NO_PROXY=172.27.125.182
	I0116 03:12:01.860684    5244 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0116 03:12:01.860684    5244 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0116 03:12:01.860684    5244 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0116 03:12:01.860684    5244 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0116 03:12:01.860684    5244 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0116 03:12:01.860684    5244 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0116 03:12:01.860684    5244 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0116 03:12:01.860684    5244 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0116 03:12:01.860684    5244 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0116 03:12:01.860684    5244 command_runner.go:130] > ExecStart=
	I0116 03:12:01.860684    5244 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0116 03:12:01.860684    5244 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0116 03:12:01.860684    5244 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0116 03:12:01.860684    5244 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0116 03:12:01.860684    5244 command_runner.go:130] > LimitNOFILE=infinity
	I0116 03:12:01.860684    5244 command_runner.go:130] > LimitNPROC=infinity
	I0116 03:12:01.860684    5244 command_runner.go:130] > LimitCORE=infinity
	I0116 03:12:01.860684    5244 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0116 03:12:01.860684    5244 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0116 03:12:01.860684    5244 command_runner.go:130] > TasksMax=infinity
	I0116 03:12:01.860684    5244 command_runner.go:130] > TimeoutStartSec=0
	I0116 03:12:01.860684    5244 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0116 03:12:01.860684    5244 command_runner.go:130] > Delegate=yes
	I0116 03:12:01.860684    5244 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0116 03:12:01.860684    5244 command_runner.go:130] > KillMode=process
	I0116 03:12:01.860684    5244 command_runner.go:130] > [Install]
	I0116 03:12:01.860684    5244 command_runner.go:130] > WantedBy=multi-user.target
	I0116 03:12:01.876569    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:12:01.912583    5244 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:12:01.953942    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:12:01.984936    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 03:12:02.019508    5244 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0116 03:12:02.074235    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 03:12:02.095242    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:12:02.121238    5244 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0116 03:12:02.135342    5244 ssh_runner.go:195] Run: which cri-dockerd
	I0116 03:12:02.140649    5244 command_runner.go:130] > /usr/bin/cri-dockerd
	I0116 03:12:02.154932    5244 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0116 03:12:02.176787    5244 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0116 03:12:02.215138    5244 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0116 03:12:02.383499    5244 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0116 03:12:02.527036    5244 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0116 03:12:02.527158    5244 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0116 03:12:02.570020    5244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:12:02.730040    5244 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0116 03:12:04.276476    5244 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5464254s)
	I0116 03:12:04.291305    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0116 03:12:04.323751    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0116 03:12:04.355821    5244 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0116 03:12:04.533402    5244 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0116 03:12:04.714120    5244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:12:04.869646    5244 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0116 03:12:04.906444    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0116 03:12:04.940509    5244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:12:05.112191    5244 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0116 03:12:05.215248    5244 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0116 03:12:05.227820    5244 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0116 03:12:05.235474    5244 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0116 03:12:05.235474    5244 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 03:12:05.235474    5244 command_runner.go:130] > Device: 16h/22d	Inode: 864         Links: 1
	I0116 03:12:05.235810    5244 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0116 03:12:05.235810    5244 command_runner.go:130] > Access: 2024-01-16 03:12:05.132913115 +0000
	I0116 03:12:05.235810    5244 command_runner.go:130] > Modify: 2024-01-16 03:12:05.132913115 +0000
	I0116 03:12:05.235810    5244 command_runner.go:130] > Change: 2024-01-16 03:12:05.137913115 +0000
	I0116 03:12:05.235810    5244 command_runner.go:130] >  Birth: -
	I0116 03:12:05.235810    5244 start.go:543] Will wait 60s for crictl version
	I0116 03:12:05.247807    5244 ssh_runner.go:195] Run: which crictl
	I0116 03:12:05.253215    5244 command_runner.go:130] > /usr/bin/crictl
	I0116 03:12:05.266265    5244 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:12:05.339577    5244 command_runner.go:130] > Version:  0.1.0
	I0116 03:12:05.339577    5244 command_runner.go:130] > RuntimeName:  docker
	I0116 03:12:05.339577    5244 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0116 03:12:05.339577    5244 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 03:12:05.339577    5244 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0116 03:12:05.351273    5244 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0116 03:12:05.383871    5244 command_runner.go:130] > 24.0.7
	I0116 03:12:05.396214    5244 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0116 03:12:05.429546    5244 command_runner.go:130] > 24.0.7
	I0116 03:12:05.430477    5244 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0116 03:12:05.431323    5244 out.go:177]   - env NO_PROXY=172.27.125.182
	I0116 03:12:05.431907    5244 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0116 03:12:05.435447    5244 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0116 03:12:05.435447    5244 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0116 03:12:05.435447    5244 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0116 03:12:05.435447    5244 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:4e:7e Flags:up|broadcast|multicast|running}
	I0116 03:12:05.438444    5244 ip.go:210] interface addr: fe80::d699:fcba:3e2b:1549/64
	I0116 03:12:05.438444    5244 ip.go:210] interface addr: 172.27.112.1/20
	I0116 03:12:05.451538    5244 ssh_runner.go:195] Run: grep 172.27.112.1	host.minikube.internal$ /etc/hosts
	I0116 03:12:05.457688    5244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:12:05.474458    5244 certs.go:56] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900 for IP: 172.27.125.77
	I0116 03:12:05.475573    5244 certs.go:190] acquiring lock for shared ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:12:05.476231    5244 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0116 03:12:05.476832    5244 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0116 03:12:05.477083    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 03:12:05.477370    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0116 03:12:05.477647    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 03:12:05.477846    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 03:12:05.478044    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508.pem (1338 bytes)
	W0116 03:12:05.478674    5244 certs.go:433] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508_empty.pem, impossibly tiny 0 bytes
	I0116 03:12:05.478869    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0116 03:12:05.479252    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0116 03:12:05.479599    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0116 03:12:05.479599    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0116 03:12:05.480227    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem (1708 bytes)
	I0116 03:12:05.480227    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> /usr/share/ca-certificates/135082.pem
	I0116 03:12:05.480769    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:12:05.481037    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508.pem -> /usr/share/ca-certificates/13508.pem
	I0116 03:12:05.482048    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:12:05.520512    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 03:12:05.558227    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:12:05.597093    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:12:05.635628    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem --> /usr/share/ca-certificates/135082.pem (1708 bytes)
	I0116 03:12:05.673393    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:12:05.710252    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508.pem --> /usr/share/ca-certificates/13508.pem (1338 bytes)
	I0116 03:12:05.764431    5244 ssh_runner.go:195] Run: openssl version
	I0116 03:12:05.771626    5244 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 03:12:05.785060    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:12:05.813063    5244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:12:05.820091    5244 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 01:40 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:12:05.820187    5244 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 01:40 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:12:05.835982    5244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:12:05.843312    5244 command_runner.go:130] > b5213941
	I0116 03:12:05.858892    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:12:05.892760    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13508.pem && ln -fs /usr/share/ca-certificates/13508.pem /etc/ssl/certs/13508.pem"
	I0116 03:12:05.922528    5244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13508.pem
	I0116 03:12:05.929191    5244 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 01:53 /usr/share/ca-certificates/13508.pem
	I0116 03:12:05.929191    5244 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 01:53 /usr/share/ca-certificates/13508.pem
	I0116 03:12:05.943356    5244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13508.pem
	I0116 03:12:05.951744    5244 command_runner.go:130] > 51391683
	I0116 03:12:05.964899    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13508.pem /etc/ssl/certs/51391683.0"
	I0116 03:12:05.997637    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135082.pem && ln -fs /usr/share/ca-certificates/135082.pem /etc/ssl/certs/135082.pem"
	I0116 03:12:06.028826    5244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135082.pem
	I0116 03:12:06.034888    5244 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 01:53 /usr/share/ca-certificates/135082.pem
	I0116 03:12:06.035421    5244 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 01:53 /usr/share/ca-certificates/135082.pem
	I0116 03:12:06.046546    5244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135082.pem
	I0116 03:12:06.056199    5244 command_runner.go:130] > 3ec20f2e
	I0116 03:12:06.072818    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/135082.pem /etc/ssl/certs/3ec20f2e.0"
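The `openssl x509 -hash` / `ln -fs` pairs above install each CA under its subject-hash name (`<hash>.0`) in /etc/ssl/certs, which is how OpenSSL locates trust anchors at verify time. A self-contained sketch of the same step, assuming `openssl` is on PATH and using a throwaway certificate and scratch directory in place of minikube's real files:

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)

# Throwaway self-signed CA standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null

# The hashing step from the log: the subject hash names the trust link.
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")

# test -L || ln -fs, as in the log, against a scratch certs dir.
mkdir -p "$dir/certs"
test -L "$dir/certs/$hash.0" || ln -fs "$dir/ca.pem" "$dir/certs/$hash.0"

openssl x509 -noout -subject -in "$dir/certs/$hash.0"
```

The `.0` suffix disambiguates distinct certificates that hash to the same subject value.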
	I0116 03:12:06.102956    5244 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:12:06.108498    5244 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 03:12:06.108498    5244 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 03:12:06.118721    5244 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0116 03:12:06.155008    5244 command_runner.go:130] > cgroupfs
	I0116 03:12:06.156294    5244 cni.go:84] Creating CNI manager for ""
	I0116 03:12:06.156294    5244 cni.go:136] 3 nodes found, recommending kindnet
	I0116 03:12:06.156394    5244 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:12:06.156449    5244 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.125.77 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-853900 NodeName:multinode-853900-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.125.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.125.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:12:06.156735    5244 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.125.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-853900-m02"
	  kubeletExtraArgs:
	    node-ip: 172.27.125.77
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.125.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:12:06.156782    5244 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-853900-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.125.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-853900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:12:06.170720    5244 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:12:06.194590    5244 command_runner.go:130] > kubeadm
	I0116 03:12:06.194590    5244 command_runner.go:130] > kubectl
	I0116 03:12:06.194590    5244 command_runner.go:130] > kubelet
	I0116 03:12:06.194771    5244 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:12:06.208540    5244 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0116 03:12:06.225712    5244 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0116 03:12:06.254538    5244 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
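The two `scp memory` lines above materialize the generated systemd units on the node: `kubelet.service` and the `10-kubeadm.conf` drop-in whose ExecStart carries the node-specific flags shown at kubeadm.go:976. A sketch writing that drop-in shape to a scratch directory (ExecStart content taken verbatim from the log; paths are scratch, not the node's real /etc/systemd/system):

```shell
#!/bin/sh
set -e
unitdir=$(mktemp -d)/kubelet.service.d
mkdir -p "$unitdir"

# Drop-in overriding ExecStart, as in the log's kubeadm.go:976 output.
cat > "$unitdir/10-kubeadm.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-853900-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.125.77
EOF

# systemd semantics: the bare "ExecStart=" first clears the base unit's
# command list, so both lines are required in the override.
grep -c '^ExecStart=' "$unitdir/10-kubeadm.conf"
```

After installing such a drop-in, the log's later `systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet` picks it up.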
	I0116 03:12:06.299637    5244 ssh_runner.go:195] Run: grep 172.27.125.182	control-plane.minikube.internal$ /etc/hosts
	I0116 03:12:06.305138    5244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.125.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:12:06.322547    5244 host.go:66] Checking if "multinode-853900" exists ...
	I0116 03:12:06.322956    5244 config.go:182] Loaded profile config "multinode-853900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:12:06.322956    5244 start.go:304] JoinCluster: &{Name:multinode-853900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-853900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.125.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.125.77 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.116.8 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:12:06.323611    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0116 03:12:06.323611    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:12:08.482198    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:12:08.482198    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:12:08.482282    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:12:11.056546    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:12:11.056607    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:12:11.056832    5244 sshutil.go:53] new ssh client: &{IP:172.27.125.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\id_rsa Username:docker}
	I0116 03:12:11.260098    5244 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token bizpab.xucswffo2f83twy5 --discovery-token-ca-cert-hash sha256:66ef9a38e06c175fa30850fd5c63399966a4115300a5c161cb370d2d951391b9 
	I0116 03:12:11.260098    5244 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9364541s)
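The join command printed above pins the cluster CA via `--discovery-token-ca-cert-hash sha256:…`, a SHA-256 over the DER-encoded public key of the CA certificate. The same value can be recomputed from any CA cert with the recipe from the kubeadm documentation; the sketch below assumes `openssl` is on PATH and uses a throwaway certificate in place of the cluster's real /var/lib/minikube/certs/ca.crt:

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)

# Throwaway CA standing in for the cluster CA.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=kubernetes" -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null

# Hash of the DER-encoded SubjectPublicKeyInfo -- the discovery pin.
pin=$(openssl x509 -pubkey -noout -in "$dir/ca.crt" \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -r | cut -d' ' -f1)
echo "sha256:$pin"
```

A joining node verifies the API server's CA against this pin before trusting anything it receives over the bootstrap token channel.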
	I0116 03:12:11.260098    5244 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.27.125.77 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0116 03:12:11.260098    5244 host.go:66] Checking if "multinode-853900" exists ...
	I0116 03:12:11.282095    5244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-853900-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0116 03:12:11.282095    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:12:13.451687    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:12:13.451849    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:12:13.451849    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:12:16.092697    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:12:16.092780    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:12:16.092967    5244 sshutil.go:53] new ssh client: &{IP:172.27.125.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\id_rsa Username:docker}
	I0116 03:12:16.286925    5244 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0116 03:12:16.367231    5244 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-6s9wr, kube-system/kube-proxy-h977r
	I0116 03:12:19.390174    5244 command_runner.go:130] > node/multinode-853900-m02 cordoned
	I0116 03:12:19.390630    5244 command_runner.go:130] > pod "busybox-5bc68d56bd-9t8fh" has DeletionTimestamp older than 1 seconds, skipping
	I0116 03:12:19.390630    5244 command_runner.go:130] > node/multinode-853900-m02 drained
	I0116 03:12:19.390742    5244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-853900-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (8.1085938s)
	I0116 03:12:19.390814    5244 node.go:108] successfully drained node "m02"
	I0116 03:12:19.391696    5244 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 03:12:19.392125    5244 kapi.go:59] client config for multinode-853900: &rest.Config{Host:"https://172.27.125.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x270c520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:12:19.393616    5244 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0116 03:12:19.393735    5244 round_trippers.go:463] DELETE https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:19.393735    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:19.393795    5244 round_trippers.go:473]     Content-Type: application/json
	I0116 03:12:19.393795    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:19.393795    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:19.411705    5244 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0116 03:12:19.411705    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:19.411705    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:19.411705    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:19.411705    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:19.412514    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:19.412539    5244 round_trippers.go:580]     Content-Length: 171
	I0116 03:12:19.412539    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:19 GMT
	I0116 03:12:19.412610    5244 round_trippers.go:580]     Audit-Id: 68de8457-4909-4bc8-8f0c-fbeb7cd065bb
	I0116 03:12:19.412610    5244 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-853900-m02","kind":"nodes","uid":"086e89e4-36c0-4f4c-8020-66e7d9fc7e84"}}
	I0116 03:12:19.412708    5244 node.go:124] successfully deleted node "m02"
	I0116 03:12:19.412708    5244 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.27.125.77 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0116 03:12:19.412767    5244 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.27.125.77 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0116 03:12:19.412819    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bizpab.xucswffo2f83twy5 --discovery-token-ca-cert-hash sha256:66ef9a38e06c175fa30850fd5c63399966a4115300a5c161cb370d2d951391b9 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-853900-m02"
	I0116 03:12:19.645416    5244 command_runner.go:130] ! W0116 03:12:19.646724    1358 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0116 03:12:20.205091    5244 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:12:22.009015    5244 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 03:12:22.009084    5244 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0116 03:12:22.009084    5244 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0116 03:12:22.009140    5244 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:12:22.009140    5244 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:12:22.009140    5244 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 03:12:22.009140    5244 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0116 03:12:22.009140    5244 command_runner.go:130] > This node has joined the cluster:
	I0116 03:12:22.009214    5244 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0116 03:12:22.009214    5244 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0116 03:12:22.009214    5244 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0116 03:12:22.009287    5244 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bizpab.xucswffo2f83twy5 --discovery-token-ca-cert-hash sha256:66ef9a38e06c175fa30850fd5c63399966a4115300a5c161cb370d2d951391b9 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-853900-m02": (2.5963775s)
	I0116 03:12:22.009338    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0116 03:12:22.306960    5244 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0116 03:12:22.570841    5244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=multinode-853900 minikube.k8s.io/updated_at=2024_01_16T03_12_22_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:12:22.732983    5244 command_runner.go:130] > node/multinode-853900-m02 labeled
	I0116 03:12:22.732983    5244 command_runner.go:130] > node/multinode-853900-m03 labeled
	I0116 03:12:22.733135    5244 start.go:306] JoinCluster complete in 16.4100712s
	I0116 03:12:22.733285    5244 cni.go:84] Creating CNI manager for ""
	I0116 03:12:22.733285    5244 cni.go:136] 3 nodes found, recommending kindnet
	I0116 03:12:22.748651    5244 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 03:12:22.756806    5244 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 03:12:22.756806    5244 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 03:12:22.756906    5244 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 03:12:22.756906    5244 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 03:12:22.756906    5244 command_runner.go:130] > Access: 2024-01-16 03:08:31.256896000 +0000
	I0116 03:12:22.756906    5244 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 03:12:22.756906    5244 command_runner.go:130] > Change: 2024-01-16 03:08:20.148000000 +0000
	I0116 03:12:22.756906    5244 command_runner.go:130] >  Birth: -
	I0116 03:12:22.757073    5244 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 03:12:22.757073    5244 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 03:12:22.800432    5244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 03:12:23.234890    5244 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 03:12:23.234986    5244 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 03:12:23.234986    5244 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 03:12:23.234986    5244 command_runner.go:130] > daemonset.apps/kindnet configured
	I0116 03:12:23.236049    5244 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 03:12:23.236596    5244 kapi.go:59] client config for multinode-853900: &rest.Config{Host:"https://172.27.125.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x270c520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:12:23.237585    5244 round_trippers.go:463] GET https://172.27.125.182:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 03:12:23.237585    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:23.237585    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:23.237585    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:23.240417    5244 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:12:23.240417    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:23.240417    5244 round_trippers.go:580]     Content-Length: 292
	I0116 03:12:23.241246    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:23 GMT
	I0116 03:12:23.241246    5244 round_trippers.go:580]     Audit-Id: 9720006d-39ac-4bc1-8367-ccef15183d47
	I0116 03:12:23.241246    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:23.241246    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:23.241246    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:23.241309    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:23.241382    5244 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bb9e4be9-a821-417a-b943-b930d6cec07c","resourceVersion":"1765","creationTimestamp":"2024-01-16T02:48:09Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 03:12:23.241502    5244 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-853900" context rescaled to 1 replicas
	I0116 03:12:23.241502    5244 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.27.125.77 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0116 03:12:23.242491    5244 out.go:177] * Verifying Kubernetes components...
	I0116 03:12:23.258305    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:12:23.277355    5244 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 03:12:23.278045    5244 kapi.go:59] client config for multinode-853900: &rest.Config{Host:"https://172.27.125.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x270c520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:12:23.278831    5244 node_ready.go:35] waiting up to 6m0s for node "multinode-853900-m02" to be "Ready" ...
	I0116 03:12:23.278988    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:23.278988    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:23.279045    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:23.279045    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:23.281225    5244 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:12:23.281225    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:23.282230    5244 round_trippers.go:580]     Audit-Id: 5660b32d-f5af-4286-9e48-032fccf69429
	I0116 03:12:23.282230    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:23.282230    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:23.282230    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:23.282230    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:23.282230    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:23 GMT
	I0116 03:12:23.282230    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"1941","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3781 chars]
	I0116 03:12:23.788108    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:23.788108    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:23.788108    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:23.788108    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:23.792865    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:12:23.792918    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:23.792918    5244 round_trippers.go:580]     Audit-Id: 7f642170-9c0f-4852-b491-b2304b7818ba
	I0116 03:12:23.793004    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:23.793004    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:23.793004    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:23.793004    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:23.793004    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:23 GMT
	I0116 03:12:23.793004    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"1941","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3781 chars]
	I0116 03:12:24.285533    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:24.285533    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:24.285533    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:24.285533    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:24.290603    5244 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:12:24.291289    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:24.291289    5244 round_trippers.go:580]     Audit-Id: f94a86d3-fb71-45f2-a060-f039f6c2644b
	I0116 03:12:24.291289    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:24.291289    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:24.291289    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:24.291382    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:24.291382    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:24 GMT
	I0116 03:12:24.291556    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"1948","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:12:24.789791    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:24.789791    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:24.789791    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:24.789791    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:24.793471    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:12:24.793471    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:24.793471    5244 round_trippers.go:580]     Audit-Id: 8abf6346-f9b0-4d3a-b8c5-6e0db80b96ac
	I0116 03:12:24.793471    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:24.793471    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:24.793471    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:24.793471    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:24.793471    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:24 GMT
	I0116 03:12:24.794490    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"1948","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:12:25.280706    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:25.280811    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:25.280811    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:25.280811    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:25.285036    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:12:25.285912    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:25.285912    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:25.285912    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:25.285912    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:25.285912    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:25 GMT
	I0116 03:12:25.285912    5244 round_trippers.go:580]     Audit-Id: 393d8e9f-6f00-497c-bea7-34735b2a066b
	I0116 03:12:25.285912    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:25.285912    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"1948","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:12:25.286792    5244 node_ready.go:58] node "multinode-853900-m02" has status "Ready":"False"
	I0116 03:12:25.779523    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:25.779651    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:25.779651    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:25.779651    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:25.784099    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:12:25.784099    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:25.784099    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:25.784099    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:25.784224    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:25.784224    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:25.784224    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:25 GMT
	I0116 03:12:25.784224    5244 round_trippers.go:580]     Audit-Id: c3a7d651-d5f0-42ea-83fa-da91536468aa
	I0116 03:12:25.784450    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"1948","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:12:26.281427    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:26.281483    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:26.281483    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:26.281577    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:26.286956    5244 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:12:26.287631    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:26.287689    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:26.287689    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:26.287689    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:26.287689    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:26.287689    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:26 GMT
	I0116 03:12:26.287689    5244 round_trippers.go:580]     Audit-Id: fec66c07-239e-425b-ae7b-8e5db1b79525
	I0116 03:12:26.287689    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"1948","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:12:26.780695    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:26.780759    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:26.780829    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:26.780829    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:26.788145    5244 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 03:12:26.788234    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:26.788234    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:26.788234    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:26 GMT
	I0116 03:12:26.788234    5244 round_trippers.go:580]     Audit-Id: c07cd6b0-59c7-4a53-996f-f171bcbd1746
	I0116 03:12:26.788234    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:26.788234    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:26.788234    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:26.788399    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"1948","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:12:27.288888    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:27.288962    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:27.288962    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:27.288962    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:27.293623    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:12:27.293623    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:27.293623    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:27 GMT
	I0116 03:12:27.293623    5244 round_trippers.go:580]     Audit-Id: 3ee798a9-8188-4f55-9b6a-4c02115501cd
	I0116 03:12:27.293838    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:27.293838    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:27.293838    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:27.293838    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:27.294016    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"1948","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:12:27.294684    5244 node_ready.go:58] node "multinode-853900-m02" has status "Ready":"False"
	I0116 03:12:27.790883    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:27.791086    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:27.791086    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:27.791086    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:27.795453    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:12:27.795764    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:27.795898    5244 round_trippers.go:580]     Audit-Id: 2c6a3ab0-de66-4149-82cd-a4e5edfa9593
	I0116 03:12:27.795898    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:27.795898    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:27.795898    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:27.795898    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:27.795898    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:27 GMT
	I0116 03:12:27.796190    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"1948","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:12:28.292777    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:28.292777    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:28.292777    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:28.292777    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:28.298955    5244 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 03:12:28.298955    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:28.298955    5244 round_trippers.go:580]     Audit-Id: 4c57e612-e73c-4d56-ace4-bab348243617
	I0116 03:12:28.298955    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:28.298955    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:28.298955    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:28.298955    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:28.298955    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:28 GMT
	I0116 03:12:28.298955    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"1948","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:12:28.793935    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:28.793935    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:28.793935    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:28.793935    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:28.797511    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:12:28.797511    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:28.797511    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:28.797511    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:28 GMT
	I0116 03:12:28.797932    5244 round_trippers.go:580]     Audit-Id: 87bd4063-3a35-4c2d-b143-d10ba981cf1d
	I0116 03:12:28.797932    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:28.797932    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:28.797932    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:28.798281    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"1948","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:12:29.294418    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:29.294418    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:29.294418    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:29.294418    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:29.299004    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:12:29.299004    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:29.299004    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:29.299004    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:29.299004    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:29 GMT
	I0116 03:12:29.299004    5244 round_trippers.go:580]     Audit-Id: 8d59db44-fa66-46e3-b4a2-90cdac986395
	I0116 03:12:29.299004    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:29.299004    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:29.299004    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"1948","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:12:29.300104    5244 node_ready.go:58] node "multinode-853900-m02" has status "Ready":"False"
	I0116 03:12:29.780640    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:29.780640    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:29.780764    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:29.780764    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:29.784724    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:12:29.785458    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:29.785458    5244 round_trippers.go:580]     Audit-Id: 69c35390-1189-4add-8c34-026194796429
	I0116 03:12:29.785458    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:29.785458    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:29.785458    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:29.785534    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:29.785534    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:29 GMT
	I0116 03:12:29.785772    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"1948","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:12:30.283225    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:30.283295    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:30.283295    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:30.283295    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:30.291516    5244 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0116 03:12:30.291516    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:30.291516    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:30.291516    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:30.291516    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:30 GMT
	I0116 03:12:30.291516    5244 round_trippers.go:580]     Audit-Id: 02d1a5a1-133d-4167-aa89-98f7dc9993a9
	I0116 03:12:30.291516    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:30.291516    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:30.291516    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"1948","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:12:30.784010    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:30.784083    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:30.784182    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:30.784182    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:30.787882    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:12:30.788604    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:30.788604    5244 round_trippers.go:580]     Audit-Id: 536968aa-ead3-471f-b369-20cdde2cb030
	I0116 03:12:30.788604    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:30.788604    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:30.788604    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:30.788604    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:30.788604    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:30 GMT
	I0116 03:12:30.788892    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"1963","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3925 chars]
	I0116 03:12:30.788946    5244 node_ready.go:49] node "multinode-853900-m02" has status "Ready":"True"
	I0116 03:12:30.788946    5244 node_ready.go:38] duration metric: took 7.5100131s waiting for node "multinode-853900-m02" to be "Ready" ...
	I0116 03:12:30.788946    5244 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:12:30.789479    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods
	I0116 03:12:30.789479    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:30.789589    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:30.789589    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:30.794519    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:12:30.794519    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:30.794519    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:30 GMT
	I0116 03:12:30.795110    5244 round_trippers.go:580]     Audit-Id: 698a8c5d-eeac-4c3b-a5f3-2b327c32f168
	I0116 03:12:30.795110    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:30.795110    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:30.795110    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:30.795110    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:30.798353    5244 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1965"},"items":[{"metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"1761","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83409 chars]
	I0116 03:12:30.803038    5244 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace to be "Ready" ...
	I0116 03:12:30.803368    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-62jpz
	I0116 03:12:30.803368    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:30.803368    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:30.803368    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:30.806178    5244 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:12:30.806178    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:30.806178    5244 round_trippers.go:580]     Audit-Id: 3eea9183-b356-4a9b-b0b6-2dc847a2a698
	I0116 03:12:30.806178    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:30.806178    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:30.806178    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:30.806178    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:30.806178    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:30 GMT
	I0116 03:12:30.807378    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"1761","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0116 03:12:30.808591    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:12:30.808591    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:30.808591    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:30.808591    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:30.814365    5244 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:12:30.814365    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:30.814365    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:30.814365    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:30.814365    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:30 GMT
	I0116 03:12:30.814365    5244 round_trippers.go:580]     Audit-Id: 2e29edee-4d5a-4ed8-a4e1-c12ea91515fa
	I0116 03:12:30.814365    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:30.814365    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:30.814917    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1774","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0116 03:12:30.815266    5244 pod_ready.go:92] pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace has status "Ready":"True"
	I0116 03:12:30.815266    5244 pod_ready.go:81] duration metric: took 12.1561ms waiting for pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace to be "Ready" ...
	I0116 03:12:30.815266    5244 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:12:30.815795    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-853900
	I0116 03:12:30.815795    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:30.815795    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:30.815907    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:30.818549    5244 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:12:30.818549    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:30.818549    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:30 GMT
	I0116 03:12:30.818549    5244 round_trippers.go:580]     Audit-Id: 3bd580f4-ef12-4675-aef5-a667335aea6b
	I0116 03:12:30.818549    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:30.818549    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:30.818549    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:30.818549    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:30.819544    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-853900","namespace":"kube-system","uid":"0830a000-5e72-4c45-a843-1dd557d188eb","resourceVersion":"1718","creationTimestamp":"2024-01-16T03:09:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.125.182:2379","kubernetes.io/config.hash":"69d98d086aafe436cd9405e0584ec9d9","kubernetes.io/config.mirror":"69d98d086aafe436cd9405e0584ec9d9","kubernetes.io/config.seen":"2024-01-16T03:09:50.494161665Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:09:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0116 03:12:30.819544    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:12:30.819544    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:30.819544    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:30.819544    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:30.823124    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:12:30.823124    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:30.823124    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:30.823124    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:30.823124    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:30 GMT
	I0116 03:12:30.823124    5244 round_trippers.go:580]     Audit-Id: ed2a2d5a-7799-4808-80c4-b394adff0c17
	I0116 03:12:30.823124    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:30.823124    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:30.823124    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1774","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0116 03:12:30.823124    5244 pod_ready.go:92] pod "etcd-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 03:12:30.823124    5244 pod_ready.go:81] duration metric: took 7.8576ms waiting for pod "etcd-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:12:30.823124    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:12:30.824185    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-853900
	I0116 03:12:30.824185    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:30.824185    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:30.824185    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:30.827131    5244 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:12:30.827131    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:30.827131    5244 round_trippers.go:580]     Audit-Id: 017f1e7e-3f2a-44a7-bbbd-84618719412f
	I0116 03:12:30.827131    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:30.827131    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:30.827131    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:30.827131    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:30.827131    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:30 GMT
	I0116 03:12:30.827507    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-853900","namespace":"kube-system","uid":"cb2bb8c0-e51a-46cf-87f4-5c3ad0287455","resourceVersion":"1722","creationTimestamp":"2024-01-16T03:10:01Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.125.182:8443","kubernetes.io/config.hash":"e8b156384a67a45d4dc14390f3884653","kubernetes.io/config.mirror":"e8b156384a67a45d4dc14390f3884653","kubernetes.io/config.seen":"2024-01-16T03:09:50.494166665Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:10:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0116 03:12:30.827774    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:12:30.827774    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:30.827774    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:30.827774    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:30.831427    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:12:30.831427    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:30.831427    5244 round_trippers.go:580]     Audit-Id: 4d018efc-71cc-42d1-838d-6f038ced2661
	I0116 03:12:30.831427    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:30.831427    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:30.831427    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:30.831427    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:30.831427    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:30 GMT
	I0116 03:12:30.831744    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1774","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0116 03:12:30.832370    5244 pod_ready.go:92] pod "kube-apiserver-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 03:12:30.832444    5244 pod_ready.go:81] duration metric: took 9.2465ms waiting for pod "kube-apiserver-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:12:30.832444    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:12:30.832444    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-853900
	I0116 03:12:30.832600    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:30.832600    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:30.832600    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:30.835605    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:12:30.835605    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:30.835605    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:30.835605    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:30 GMT
	I0116 03:12:30.835605    5244 round_trippers.go:580]     Audit-Id: 87bef357-57df-4dbc-8c3b-589a515b2e04
	I0116 03:12:30.835830    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:30.835830    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:30.835830    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:30.836183    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-853900","namespace":"kube-system","uid":"5a4d4e86-9836-401a-8d98-1519ff75a0ec","resourceVersion":"1746","creationTimestamp":"2024-01-16T02:48:08Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f09e1ab837c9ef5b247e4d57afe8993b","kubernetes.io/config.mirror":"f09e1ab837c9ef5b247e4d57afe8993b","kubernetes.io/config.seen":"2024-01-16T02:48:00.146129509Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0116 03:12:30.836597    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:12:30.836597    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:30.836597    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:30.836597    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:30.839185    5244 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:12:30.839185    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:30.839185    5244 round_trippers.go:580]     Audit-Id: b2b8912b-fcdd-4b57-aafe-2c82ac985525
	I0116 03:12:30.839185    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:30.839185    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:30.839969    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:30.839969    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:30.839969    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:30 GMT
	I0116 03:12:30.840202    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1774","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0116 03:12:30.840554    5244 pod_ready.go:92] pod "kube-controller-manager-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 03:12:30.840612    5244 pod_ready.go:81] duration metric: took 8.1684ms waiting for pod "kube-controller-manager-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:12:30.840612    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h977r" in "kube-system" namespace to be "Ready" ...
	I0116 03:12:30.989505    5244 request.go:629] Waited for 148.5109ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h977r
	I0116 03:12:30.989617    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h977r
	I0116 03:12:30.989617    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:30.989617    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:30.989617    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:30.993816    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:12:30.993816    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:30.993816    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:30.993879    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:30 GMT
	I0116 03:12:30.993879    5244 round_trippers.go:580]     Audit-Id: 32f99d12-1c00-43b6-8b56-36c786023692
	I0116 03:12:30.993879    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:30.993879    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:30.993879    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:30.994108    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-h977r","generateName":"kube-proxy-","namespace":"kube-system","uid":"5434ef27-d483-46c1-a95d-bd86163ee965","resourceVersion":"1943","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0116 03:12:31.191613    5244 request.go:629] Waited for 196.9709ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:31.191817    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:12:31.191817    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:31.191817    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:31.191817    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:31.197242    5244 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:12:31.197242    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:31.197242    5244 round_trippers.go:580]     Audit-Id: 1a01b43a-a25f-439a-8043-eae2373d1dae
	I0116 03:12:31.197242    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:31.197242    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:31.197242    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:31.197888    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:31.197888    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:31 GMT
	I0116 03:12:31.198042    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"1963","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3925 chars]
	I0116 03:12:31.198574    5244 pod_ready.go:92] pod "kube-proxy-h977r" in "kube-system" namespace has status "Ready":"True"
	I0116 03:12:31.198627    5244 pod_ready.go:81] duration metric: took 358.0128ms waiting for pod "kube-proxy-h977r" in "kube-system" namespace to be "Ready" ...
	I0116 03:12:31.198627    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rfglr" in "kube-system" namespace to be "Ready" ...
	I0116 03:12:31.395615    5244 request.go:629] Waited for 196.6176ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfglr
	I0116 03:12:31.395876    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfglr
	I0116 03:12:31.395876    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:31.395876    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:31.395876    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:31.400292    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:12:31.400292    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:31.400292    5244 round_trippers.go:580]     Audit-Id: 0524c796-43db-4db5-a4a7-25ab8b096023
	I0116 03:12:31.400927    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:31.400927    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:31.400927    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:31.400927    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:31.400927    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:31 GMT
	I0116 03:12:31.401307    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rfglr","generateName":"kube-proxy-","namespace":"kube-system","uid":"80452c87-583e-40d7-aec9-4c790772a538","resourceVersion":"1815","creationTimestamp":"2024-01-16T02:55:40Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:55:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5968 chars]
	I0116 03:12:31.597150    5244 request.go:629] Waited for 195.3928ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:12:31.597358    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:12:31.597358    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:31.597358    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:31.597358    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:31.600731    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:12:31.601114    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:31.601114    5244 round_trippers.go:580]     Audit-Id: af96523d-f3fd-49f2-8e7f-484dbd4fce06
	I0116 03:12:31.601114    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:31.601114    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:31.601114    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:31.601187    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:31.601187    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:31 GMT
	I0116 03:12:31.601187    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"99b99df3-ad5b-4c59-a7a0-406b850f5433","resourceVersion":"1942","creationTimestamp":"2024-01-16T03:05:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_12_22_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:05:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 4391 chars]
	I0116 03:12:31.601837    5244 pod_ready.go:97] node "multinode-853900-m03" hosting pod "kube-proxy-rfglr" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-853900-m03" has status "Ready":"Unknown"
	I0116 03:12:31.601837    5244 pod_ready.go:81] duration metric: took 403.2067ms waiting for pod "kube-proxy-rfglr" in "kube-system" namespace to be "Ready" ...
	E0116 03:12:31.601837    5244 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-853900-m03" hosting pod "kube-proxy-rfglr" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-853900-m03" has status "Ready":"Unknown"
	I0116 03:12:31.601837    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tpc2g" in "kube-system" namespace to be "Ready" ...
	I0116 03:12:31.798578    5244 request.go:629] Waited for 196.7399ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tpc2g
	I0116 03:12:31.798985    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tpc2g
	I0116 03:12:31.798985    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:31.798985    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:31.798985    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:31.803560    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:12:31.803560    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:31.803560    5244 round_trippers.go:580]     Audit-Id: 3f46637b-a5f6-4e23-acbc-ba993e24ceee
	I0116 03:12:31.803710    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:31.803710    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:31.803710    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:31.803710    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:31.803710    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:31 GMT
	I0116 03:12:31.804346    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tpc2g","generateName":"kube-proxy-","namespace":"kube-system","uid":"0cb279ef-9d3a-4c55-9c57-ce7eede8a052","resourceVersion":"1708","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0116 03:12:31.986791    5244 request.go:629] Waited for 181.2061ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:12:31.987042    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:12:31.987042    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:31.987042    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:31.987042    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:31.991914    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:12:31.991914    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:31.992159    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:31.992159    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:31.992159    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:31.992159    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:31 GMT
	I0116 03:12:31.992159    5244 round_trippers.go:580]     Audit-Id: bd9aa543-20e3-4019-84c7-d12b905a8836
	I0116 03:12:31.992159    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:31.992513    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1774","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0116 03:12:31.993075    5244 pod_ready.go:92] pod "kube-proxy-tpc2g" in "kube-system" namespace has status "Ready":"True"
	I0116 03:12:31.993149    5244 pod_ready.go:81] duration metric: took 391.3099ms waiting for pod "kube-proxy-tpc2g" in "kube-system" namespace to be "Ready" ...
	I0116 03:12:31.993149    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:12:32.189716    5244 request.go:629] Waited for 196.2575ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-853900
	I0116 03:12:32.189884    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-853900
	I0116 03:12:32.189884    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:32.189884    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:32.189884    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:32.198400    5244 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0116 03:12:32.198400    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:32.198400    5244 round_trippers.go:580]     Audit-Id: 2533903b-6974-483b-8dfe-5f6d57d1c63a
	I0116 03:12:32.198400    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:32.198400    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:32.198400    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:32.198400    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:32.198400    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:32 GMT
	I0116 03:12:32.198400    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-853900","namespace":"kube-system","uid":"d75db7e3-c171-428f-9c08-f268ce16da31","resourceVersion":"1723","creationTimestamp":"2024-01-16T02:48:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aff36fe37a6d6fc8d309826a0f54f93d","kubernetes.io/config.mirror":"aff36fe37a6d6fc8d309826a0f54f93d","kubernetes.io/config.seen":"2024-01-16T02:48:09.211494477Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0116 03:12:32.389549    5244 request.go:629] Waited for 189.7423ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:12:32.389873    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:12:32.389873    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:32.390002    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:32.390002    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:32.394307    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:12:32.394307    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:32.394307    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:32.394307    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:32.394307    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:32.394307    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:32 GMT
	I0116 03:12:32.394307    5244 round_trippers.go:580]     Audit-Id: f36858bf-14b0-448c-bd78-9aeaf48e1e06
	I0116 03:12:32.394307    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:32.394307    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1774","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0116 03:12:32.395059    5244 pod_ready.go:92] pod "kube-scheduler-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 03:12:32.395147    5244 pod_ready.go:81] duration metric: took 401.9953ms waiting for pod "kube-scheduler-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:12:32.395147    5244 pod_ready.go:38] duration metric: took 1.6061904s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:12:32.395147    5244 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:12:32.409798    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:12:32.436450    5244 system_svc.go:56] duration metric: took 41.3024ms WaitForService to wait for kubelet.
	I0116 03:12:32.436450    5244 kubeadm.go:581] duration metric: took 9.1948876s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:12:32.436450    5244 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:12:32.591934    5244 request.go:629] Waited for 155.175ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes
	I0116 03:12:32.592254    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes
	I0116 03:12:32.592254    5244 round_trippers.go:469] Request Headers:
	I0116 03:12:32.592254    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:12:32.592254    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:12:32.600201    5244 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 03:12:32.600377    5244 round_trippers.go:577] Response Headers:
	I0116 03:12:32.600377    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:12:32.600377    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:12:32 GMT
	I0116 03:12:32.600377    5244 round_trippers.go:580]     Audit-Id: 06723bac-8873-4cc0-8e3c-a46148893b77
	I0116 03:12:32.600377    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:12:32.600377    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:12:32.600446    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:12:32.601195    5244 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1968"},"items":[{"metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1774","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15592 chars]
	I0116 03:12:32.602378    5244 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:12:32.602378    5244 node_conditions.go:123] node cpu capacity is 2
	I0116 03:12:32.602378    5244 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:12:32.602378    5244 node_conditions.go:123] node cpu capacity is 2
	I0116 03:12:32.602378    5244 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:12:32.602378    5244 node_conditions.go:123] node cpu capacity is 2
	I0116 03:12:32.602378    5244 node_conditions.go:105] duration metric: took 165.9269ms to run NodePressure ...
	I0116 03:12:32.602378    5244 start.go:228] waiting for startup goroutines ...
	I0116 03:12:32.602378    5244 start.go:242] writing updated cluster config ...
	I0116 03:12:32.618597    5244 config.go:182] Loaded profile config "multinode-853900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:12:32.618816    5244 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\config.json ...
	I0116 03:12:32.622963    5244 out.go:177] * Starting worker node multinode-853900-m03 in cluster multinode-853900
	I0116 03:12:32.623717    5244 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0116 03:12:32.623800    5244 cache.go:56] Caching tarball of preloaded images
	I0116 03:12:32.624202    5244 preload.go:174] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0116 03:12:32.624296    5244 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0116 03:12:32.624296    5244 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\config.json ...
	I0116 03:12:32.632127    5244 start.go:365] acquiring machines lock for multinode-853900-m03: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0116 03:12:32.633201    5244 start.go:369] acquired machines lock for "multinode-853900-m03" in 1.0732ms
	I0116 03:12:32.633201    5244 start.go:96] Skipping create...Using existing machine configuration
	I0116 03:12:32.633413    5244 fix.go:54] fixHost starting: m03
	I0116 03:12:32.633887    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:12:34.801810    5244 main.go:141] libmachine: [stdout =====>] : Off
	
	I0116 03:12:34.801810    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:12:34.801810    5244 fix.go:102] recreateIfNeeded on multinode-853900-m03: state=Stopped err=<nil>
	W0116 03:12:34.801810    5244 fix.go:128] unexpected machine state, will restart: <nil>
	I0116 03:12:34.802884    5244 out.go:177] * Restarting existing hyperv VM for "multinode-853900-m03" ...
	I0116 03:12:34.803383    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-853900-m03
	I0116 03:12:37.250828    5244 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:12:37.250828    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:12:37.250828    5244 main.go:141] libmachine: Waiting for host to start...
	I0116 03:12:37.250909    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:12:39.566274    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:12:39.566274    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:12:39.566274    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:12:42.102126    5244 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:12:42.102162    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:12:43.117098    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:12:45.338026    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:12:45.338114    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:12:45.338114    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:12:47.889468    5244 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:12:47.889644    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:12:48.904376    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:12:51.097237    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:12:51.097237    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:12:51.097477    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:12:53.638977    5244 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:12:53.639281    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:12:54.641603    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:12:56.883189    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:12:56.883189    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:12:56.883296    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:12:59.438625    5244 main.go:141] libmachine: [stdout =====>] : 
	I0116 03:12:59.438625    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:00.443617    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:13:02.652848    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:13:02.652918    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:02.652918    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:13:05.273070    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.42
	
	I0116 03:13:05.273296    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:05.276379    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:13:07.436606    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:13:07.436606    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:07.436606    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:13:09.948417    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.42
	
	I0116 03:13:09.948680    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:09.948950    5244 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900\config.json ...
	I0116 03:13:09.951586    5244 machine.go:88] provisioning docker machine ...
	I0116 03:13:09.951586    5244 buildroot.go:166] provisioning hostname "multinode-853900-m03"
	I0116 03:13:09.951586    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:13:12.099526    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:13:12.099723    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:12.099723    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:13:14.640865    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.42
	
	I0116 03:13:14.641146    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:14.647376    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:14.648192    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.42 22 <nil> <nil>}
	I0116 03:13:14.648192    5244 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-853900-m03 && echo "multinode-853900-m03" | sudo tee /etc/hostname
	I0116 03:13:14.824324    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-853900-m03
	
	I0116 03:13:14.824324    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:13:17.003211    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:13:17.003211    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:17.003211    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:13:19.560567    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.42
	
	I0116 03:13:19.560567    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:19.566608    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:19.566757    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.42 22 <nil> <nil>}
	I0116 03:13:19.566757    5244 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-853900-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-853900-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-853900-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0116 03:13:19.736612    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
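	The /etc/hosts exchange above is idempotent: if the node name is already present nothing happens; otherwise an existing 127.0.1.1 line is rewritten in place, or one is appended. A standalone sketch of the same logic (the /tmp path and seeded file contents are illustrative, not from the log):

```shell
#!/bin/sh
# Sketch of minikube's idempotent /etc/hosts hostname update.
# NODE and HOSTS are demo placeholders; GNU grep/sed assumed.
NODE="multinode-853900-m03"
HOSTS="/tmp/hosts.demo"
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "\s$NODE\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1\s' "$HOSTS"; then
        # A 127.0.1.1 entry exists: rewrite it in place.
        sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $NODE/" "$HOSTS"
    else
        # No 127.0.1.1 entry yet: append one.
        echo "127.0.1.1 $NODE" >> "$HOSTS"
    fi
fi
cat "$HOSTS"
```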
	I0116 03:13:19.736612    5244 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0116 03:13:19.736612    5244 buildroot.go:174] setting up certificates
	I0116 03:13:19.736612    5244 provision.go:83] configureAuth start
	I0116 03:13:19.736612    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:13:21.849082    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:13:21.849317    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:21.849408    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:13:24.373263    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.42
	
	I0116 03:13:24.373263    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:24.373540    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:13:26.522513    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:13:26.522513    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:26.522513    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:13:29.014160    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.42
	
	I0116 03:13:29.014160    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:29.014160    5244 provision.go:138] copyHostCerts
	I0116 03:13:29.014449    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0116 03:13:29.014449    5244 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0116 03:13:29.014449    5244 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0116 03:13:29.015027    5244 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0116 03:13:29.016252    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0116 03:13:29.016252    5244 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0116 03:13:29.016252    5244 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0116 03:13:29.016965    5244 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1675 bytes)
	I0116 03:13:29.017720    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0116 03:13:29.018337    5244 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0116 03:13:29.018337    5244 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0116 03:13:29.018337    5244 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0116 03:13:29.019874    5244 provision.go:112] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-853900-m03 san=[172.27.125.42 172.27.125.42 localhost 127.0.0.1 minikube multinode-853900-m03]
	I0116 03:13:29.134521    5244 provision.go:172] copyRemoteCerts
	I0116 03:13:29.147550    5244 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0116 03:13:29.147550    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:13:31.232846    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:13:31.232846    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:31.232846    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:13:33.774574    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.42
	
	I0116 03:13:33.774792    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:33.775026    5244 sshutil.go:53] new ssh client: &{IP:172.27.125.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m03\id_rsa Username:docker}
	I0116 03:13:33.881815    5244 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7342343s)
	I0116 03:13:33.881815    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0116 03:13:33.881815    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0116 03:13:33.929737    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0116 03:13:33.929737    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0116 03:13:33.969964    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0116 03:13:33.969964    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0116 03:13:34.015249    5244 provision.go:86] duration metric: configureAuth took 14.2785425s
	I0116 03:13:34.015249    5244 buildroot.go:189] setting minikube options for container-runtime
	I0116 03:13:34.016244    5244 config.go:182] Loaded profile config "multinode-853900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:13:34.016244    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:13:36.154634    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:13:36.154904    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:36.154904    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:13:38.700051    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.42
	
	I0116 03:13:38.700051    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:38.707096    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:38.707854    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.42 22 <nil> <nil>}
	I0116 03:13:38.707854    5244 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0116 03:13:38.865365    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0116 03:13:38.865365    5244 buildroot.go:70] root file system type: tmpfs
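	The probe above works because GNU `df --output=fstype /` prints a header row plus one data row, so `tail -n 1` leaves just the filesystem type (tmpfs on the buildroot guest). The same one-liner, runnable locally on a Linux host:

```shell
#!/bin/sh
# Root-filesystem probe as used above: df prints a "Type" header line
# followed by the fstype, and tail keeps only the last line.
FSTYPE=$(df --output=fstype / | tail -n 1)
echo "root fstype: $FSTYPE"
```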
	I0116 03:13:38.865657    5244 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0116 03:13:38.865657    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:13:40.988282    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:13:40.988282    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:40.988388    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:13:43.480717    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.42
	
	I0116 03:13:43.481100    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:43.486517    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:43.487294    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.42 22 <nil> <nil>}
	I0116 03:13:43.487408    5244 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.125.182"
	Environment="NO_PROXY=172.27.125.182,172.27.125.77"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0116 03:13:43.663968    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.125.182
	Environment=NO_PROXY=172.27.125.182,172.27.125.77
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0116 03:13:43.664593    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:13:45.786005    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:13:45.786005    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:45.786005    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:13:48.304338    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.42
	
	I0116 03:13:48.304338    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:48.310432    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:48.310980    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.42 22 <nil> <nil>}
	I0116 03:13:48.310980    5244 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0116 03:13:49.425255    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0116 03:13:49.425372    5244 machine.go:91] provisioned docker machine in 39.4734084s
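	The `diff ... || { mv ...; restart; }` command a few lines above is an update-if-changed idiom: the new unit is written to docker.service.new, and only when it differs from the installed file (or, as on this first boot, the installed file does not exist) is it swapped in and the daemon restarted. A minimal sketch with demo paths, the daemon-reload/restart step stubbed out with an echo:

```shell
#!/bin/sh
# Sketch of the update-if-changed idiom; CUR/NEW are demo placeholders.
CUR="/tmp/demo.service"
NEW="/tmp/demo.service.new"
rm -f "$CUR"                       # simulate the first-boot case from the log
printf '[Service]\nExecStart=/usr/bin/demo --v2\n' > "$NEW"

# diff exits non-zero when the files differ or CUR does not exist yet,
# so the install-and-restart branch runs only when something changed.
if ! diff -u "$CUR" "$NEW" >/dev/null 2>&1; then
    mv "$NEW" "$CUR"
    echo "unit changed: would daemon-reload and restart docker"
fi
```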
	I0116 03:13:49.425372    5244 start.go:300] post-start starting for "multinode-853900-m03" (driver="hyperv")
	I0116 03:13:49.425372    5244 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0116 03:13:49.440877    5244 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0116 03:13:49.440877    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:13:51.579997    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:13:51.580264    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:51.580320    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:13:54.143448    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.42
	
	I0116 03:13:54.143448    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:54.143448    5244 sshutil.go:53] new ssh client: &{IP:172.27.125.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m03\id_rsa Username:docker}
	I0116 03:13:54.257234    5244 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8162499s)
	I0116 03:13:54.271445    5244 ssh_runner.go:195] Run: cat /etc/os-release
	I0116 03:13:54.277642    5244 command_runner.go:130] > NAME=Buildroot
	I0116 03:13:54.277642    5244 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0116 03:13:54.277642    5244 command_runner.go:130] > ID=buildroot
	I0116 03:13:54.277642    5244 command_runner.go:130] > VERSION_ID=2021.02.12
	I0116 03:13:54.277642    5244 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0116 03:13:54.277946    5244 info.go:137] Remote host: Buildroot 2021.02.12
	I0116 03:13:54.278023    5244 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0116 03:13:54.278499    5244 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0116 03:13:54.279797    5244 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> 135082.pem in /etc/ssl/certs
	I0116 03:13:54.279797    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> /etc/ssl/certs/135082.pem
	I0116 03:13:54.297301    5244 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0116 03:13:54.313617    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem --> /etc/ssl/certs/135082.pem (1708 bytes)
	I0116 03:13:54.356838    5244 start.go:303] post-start completed in 4.9314341s
	I0116 03:13:54.356838    5244 fix.go:56] fixHost completed within 1m21.7228858s
	I0116 03:13:54.356838    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:13:56.526360    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:13:56.526691    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:56.526691    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:13:59.132411    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.42
	
	I0116 03:13:59.132652    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:13:59.138425    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:13:59.139004    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.42 22 <nil> <nil>}
	I0116 03:13:59.139004    5244 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0116 03:13:59.295785    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: 1705374839.296024364
	
	I0116 03:13:59.295885    5244 fix.go:206] guest clock: 1705374839.296024364
	I0116 03:13:59.295885    5244 fix.go:219] Guest: 2024-01-16 03:13:59.296024364 +0000 UTC Remote: 2024-01-16 03:13:54.3568388 +0000 UTC m=+358.836365201 (delta=4.939185564s)
	I0116 03:13:59.295885    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:14:01.409867    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:14:01.410032    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:14:01.410032    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:14:03.987751    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.42
	
	I0116 03:14:03.987751    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:14:03.992718    5244 main.go:141] libmachine: Using SSH client type: native
	I0116 03:14:03.993477    5244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1326120] 0x1328c60 <nil>  [] 0s} 172.27.125.42 22 <nil> <nil>}
	I0116 03:14:03.993477    5244 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1705374839
	I0116 03:14:04.160857    5244 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue Jan 16 03:13:59 UTC 2024
	
	I0116 03:14:04.160857    5244 fix.go:226] clock set: Tue Jan 16 03:13:59 UTC 2024
	 (err=<nil>)
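	The clock fix above compares the guest's `date +%s.%N` output against the host's wall clock (a 4.9s delta here, accumulated while the node was being provisioned) and then resets the guest with `date -s @<epoch>`. A sketch of that decision, using the epoch values from the log (the host value is rounded to whole seconds, and the 2s threshold is illustrative, not minikube's actual cutoff):

```shell
#!/bin/sh
# Sketch of the guest-clock delta check; GUEST/HOST epochs come from the
# log above (host rounded down), and the threshold is illustrative.
GUEST=1705374839   # guest `date +%s.%N`, truncated to seconds
HOST=1705374834    # host wall clock at the same moment (approx.)
DELTA=$((GUEST - HOST))
[ "$DELTA" -lt 0 ] && DELTA=$((-DELTA))
if [ "$DELTA" -ge 2 ]; then
    # At this point minikube runs `sudo date -s @<epoch>` over SSH.
    echo "delta=${DELTA}s: would run 'date -s @$GUEST' on the guest"
fi
```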
	I0116 03:14:04.160857    5244 start.go:83] releasing machines lock for "multinode-853900-m03", held for 1m31.5270521s
	I0116 03:14:04.160857    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:14:06.307908    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:14:06.308065    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:14:06.308065    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:14:08.827199    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.42
	
	I0116 03:14:08.827199    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:14:08.828015    5244 out.go:177] * Found network options:
	I0116 03:14:08.829286    5244 out.go:177]   - NO_PROXY=172.27.125.182,172.27.125.77
	W0116 03:14:08.830004    5244 proxy.go:119] fail to check proxy env: Error ip not in block
	W0116 03:14:08.830031    5244 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 03:14:08.830708    5244 out.go:177]   - NO_PROXY=172.27.125.182,172.27.125.77
	W0116 03:14:08.831378    5244 proxy.go:119] fail to check proxy env: Error ip not in block
	W0116 03:14:08.831450    5244 proxy.go:119] fail to check proxy env: Error ip not in block
	W0116 03:14:08.832570    5244 proxy.go:119] fail to check proxy env: Error ip not in block
	W0116 03:14:08.832570    5244 proxy.go:119] fail to check proxy env: Error ip not in block
	I0116 03:14:08.835626    5244 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0116 03:14:08.835726    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:14:08.847728    5244 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0116 03:14:08.847728    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:14:11.038353    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:14:11.038353    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:14:11.038353    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:14:11.038353    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:14:11.038353    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:14:11.038353    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m03 ).networkadapters[0]).ipaddresses[0]
	I0116 03:14:13.733618    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.42
	
	I0116 03:14:13.733618    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:14:13.733618    5244 sshutil.go:53] new ssh client: &{IP:172.27.125.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m03\id_rsa Username:docker}
	I0116 03:14:13.754402    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.42
	
	I0116 03:14:13.754402    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:14:13.754402    5244 sshutil.go:53] new ssh client: &{IP:172.27.125.42 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m03\id_rsa Username:docker}
	I0116 03:14:13.849656    5244 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0116 03:14:13.849656    5244 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.001895s)
	W0116 03:14:13.849656    5244 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0116 03:14:13.867703    5244 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0116 03:14:13.951349    5244 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0116 03:14:13.951402    5244 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0116 03:14:13.951523    5244 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0116 03:14:13.951575    5244 start.go:475] detecting cgroup driver to use...
	I0116 03:14:13.951402    5244 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.115696s)
	I0116 03:14:13.951834    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:14:13.983510    5244 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0116 03:14:13.997336    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0116 03:14:14.028811    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0116 03:14:14.047585    5244 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0116 03:14:14.062895    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0116 03:14:14.095880    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 03:14:14.127578    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0116 03:14:14.160074    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0116 03:14:14.199429    5244 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0116 03:14:14.235526    5244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0116 03:14:14.268192    5244 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0116 03:14:14.285506    5244 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0116 03:14:14.304704    5244 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0116 03:14:14.336366    5244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:14:14.512177    5244 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0116 03:14:14.539302    5244 start.go:475] detecting cgroup driver to use...
	I0116 03:14:14.553541    5244 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0116 03:14:14.575224    5244 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0116 03:14:14.575224    5244 command_runner.go:130] > [Unit]
	I0116 03:14:14.575224    5244 command_runner.go:130] > Description=Docker Application Container Engine
	I0116 03:14:14.575364    5244 command_runner.go:130] > Documentation=https://docs.docker.com
	I0116 03:14:14.575364    5244 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0116 03:14:14.575364    5244 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0116 03:14:14.575364    5244 command_runner.go:130] > StartLimitBurst=3
	I0116 03:14:14.575364    5244 command_runner.go:130] > StartLimitIntervalSec=60
	I0116 03:14:14.575364    5244 command_runner.go:130] > [Service]
	I0116 03:14:14.575434    5244 command_runner.go:130] > Type=notify
	I0116 03:14:14.575434    5244 command_runner.go:130] > Restart=on-failure
	I0116 03:14:14.575434    5244 command_runner.go:130] > Environment=NO_PROXY=172.27.125.182
	I0116 03:14:14.575434    5244 command_runner.go:130] > Environment=NO_PROXY=172.27.125.182,172.27.125.77
	I0116 03:14:14.575434    5244 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0116 03:14:14.575434    5244 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0116 03:14:14.575434    5244 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0116 03:14:14.575434    5244 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0116 03:14:14.575434    5244 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0116 03:14:14.575434    5244 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0116 03:14:14.575434    5244 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0116 03:14:14.575434    5244 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0116 03:14:14.575434    5244 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0116 03:14:14.575434    5244 command_runner.go:130] > ExecStart=
	I0116 03:14:14.575434    5244 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0116 03:14:14.575434    5244 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0116 03:14:14.575434    5244 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0116 03:14:14.575434    5244 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0116 03:14:14.575434    5244 command_runner.go:130] > LimitNOFILE=infinity
	I0116 03:14:14.575434    5244 command_runner.go:130] > LimitNPROC=infinity
	I0116 03:14:14.575434    5244 command_runner.go:130] > LimitCORE=infinity
	I0116 03:14:14.575434    5244 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0116 03:14:14.575434    5244 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0116 03:14:14.575434    5244 command_runner.go:130] > TasksMax=infinity
	I0116 03:14:14.575434    5244 command_runner.go:130] > TimeoutStartSec=0
	I0116 03:14:14.575434    5244 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0116 03:14:14.575434    5244 command_runner.go:130] > Delegate=yes
	I0116 03:14:14.575434    5244 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0116 03:14:14.575434    5244 command_runner.go:130] > KillMode=process
	I0116 03:14:14.575434    5244 command_runner.go:130] > [Install]
	I0116 03:14:14.575434    5244 command_runner.go:130] > WantedBy=multi-user.target
	I0116 03:14:14.591952    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:14:14.626857    5244 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0116 03:14:14.678239    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0116 03:14:14.718084    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 03:14:14.757567    5244 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0116 03:14:14.819595    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0116 03:14:14.840045    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0116 03:14:14.870141    5244 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0116 03:14:14.886148    5244 ssh_runner.go:195] Run: which cri-dockerd
	I0116 03:14:14.891365    5244 command_runner.go:130] > /usr/bin/cri-dockerd
	I0116 03:14:14.906243    5244 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0116 03:14:14.921279    5244 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0116 03:14:14.965996    5244 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0116 03:14:15.148838    5244 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0116 03:14:15.315903    5244 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0116 03:14:15.316901    5244 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0116 03:14:15.359494    5244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:14:15.544391    5244 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0116 03:14:17.113763    5244 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.569362s)
	I0116 03:14:17.126930    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0116 03:14:17.159201    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0116 03:14:17.196484    5244 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0116 03:14:17.370222    5244 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0116 03:14:17.558893    5244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:14:17.729662    5244 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0116 03:14:17.769077    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0116 03:14:17.801122    5244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0116 03:14:17.961622    5244 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0116 03:14:18.074276    5244 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0116 03:14:18.088562    5244 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0116 03:14:18.098250    5244 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0116 03:14:18.098250    5244 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0116 03:14:18.098434    5244 command_runner.go:130] > Device: 16h/22d	Inode: 900         Links: 1
	I0116 03:14:18.098434    5244 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0116 03:14:18.098434    5244 command_runner.go:130] > Access: 2024-01-16 03:14:17.982302875 +0000
	I0116 03:14:18.098479    5244 command_runner.go:130] > Modify: 2024-01-16 03:14:17.982302875 +0000
	I0116 03:14:18.098479    5244 command_runner.go:130] > Change: 2024-01-16 03:14:17.987302875 +0000
	I0116 03:14:18.098479    5244 command_runner.go:130] >  Birth: -
	I0116 03:14:18.099278    5244 start.go:543] Will wait 60s for crictl version
	I0116 03:14:18.114462    5244 ssh_runner.go:195] Run: which crictl
	I0116 03:14:18.119430    5244 command_runner.go:130] > /usr/bin/crictl
	I0116 03:14:18.132464    5244 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0116 03:14:18.199456    5244 command_runner.go:130] > Version:  0.1.0
	I0116 03:14:18.199456    5244 command_runner.go:130] > RuntimeName:  docker
	I0116 03:14:18.199456    5244 command_runner.go:130] > RuntimeVersion:  24.0.7
	I0116 03:14:18.199456    5244 command_runner.go:130] > RuntimeApiVersion:  v1
	I0116 03:14:18.199456    5244 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0116 03:14:18.209414    5244 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0116 03:14:18.245284    5244 command_runner.go:130] > 24.0.7
	I0116 03:14:18.255674    5244 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0116 03:14:18.285705    5244 command_runner.go:130] > 24.0.7
	I0116 03:14:18.287678    5244 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0116 03:14:18.287678    5244 out.go:177]   - env NO_PROXY=172.27.125.182
	I0116 03:14:18.288721    5244 out.go:177]   - env NO_PROXY=172.27.125.182,172.27.125.77
	I0116 03:14:18.289678    5244 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0116 03:14:18.293651    5244 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0116 03:14:18.293651    5244 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0116 03:14:18.293651    5244 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0116 03:14:18.293651    5244 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a6:4e:7e Flags:up|broadcast|multicast|running}
	I0116 03:14:18.296649    5244 ip.go:210] interface addr: fe80::d699:fcba:3e2b:1549/64
	I0116 03:14:18.296649    5244 ip.go:210] interface addr: 172.27.112.1/20
	I0116 03:14:18.312651    5244 ssh_runner.go:195] Run: grep 172.27.112.1	host.minikube.internal$ /etc/hosts
	I0116 03:14:18.318944    5244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.112.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:14:18.338409    5244 certs.go:56] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-853900 for IP: 172.27.125.42
	I0116 03:14:18.338409    5244 certs.go:190] acquiring lock for shared ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 03:14:18.339288    5244 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0116 03:14:18.339405    5244 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0116 03:14:18.339405    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0116 03:14:18.340120    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0116 03:14:18.340120    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0116 03:14:18.340120    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0116 03:14:18.341000    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508.pem (1338 bytes)
	W0116 03:14:18.341360    5244 certs.go:433] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508_empty.pem, impossibly tiny 0 bytes
	I0116 03:14:18.341564    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0116 03:14:18.341765    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0116 03:14:18.341765    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0116 03:14:18.342294    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0116 03:14:18.342962    5244 certs.go:437] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem (1708 bytes)
	I0116 03:14:18.343230    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:14:18.343230    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508.pem -> /usr/share/ca-certificates/13508.pem
	I0116 03:14:18.343609    5244 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem -> /usr/share/ca-certificates/135082.pem
	I0116 03:14:18.344438    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0116 03:14:18.385591    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0116 03:14:18.425622    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0116 03:14:18.465106    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0116 03:14:18.503847    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0116 03:14:18.543757    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\13508.pem --> /usr/share/ca-certificates/13508.pem (1338 bytes)
	I0116 03:14:18.581050    5244 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\135082.pem --> /usr/share/ca-certificates/135082.pem (1708 bytes)
	I0116 03:14:18.638122    5244 ssh_runner.go:195] Run: openssl version
	I0116 03:14:18.646088    5244 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0116 03:14:18.659983    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13508.pem && ln -fs /usr/share/ca-certificates/13508.pem /etc/ssl/certs/13508.pem"
	I0116 03:14:18.691915    5244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13508.pem
	I0116 03:14:18.699397    5244 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 16 01:53 /usr/share/ca-certificates/13508.pem
	I0116 03:14:18.699397    5244 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 16 01:53 /usr/share/ca-certificates/13508.pem
	I0116 03:14:18.714089    5244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13508.pem
	I0116 03:14:18.722019    5244 command_runner.go:130] > 51391683
	I0116 03:14:18.737017    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13508.pem /etc/ssl/certs/51391683.0"
	I0116 03:14:18.768511    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135082.pem && ln -fs /usr/share/ca-certificates/135082.pem /etc/ssl/certs/135082.pem"
	I0116 03:14:18.800551    5244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135082.pem
	I0116 03:14:18.807546    5244 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 16 01:53 /usr/share/ca-certificates/135082.pem
	I0116 03:14:18.807546    5244 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 16 01:53 /usr/share/ca-certificates/135082.pem
	I0116 03:14:18.821905    5244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135082.pem
	I0116 03:14:18.829773    5244 command_runner.go:130] > 3ec20f2e
	I0116 03:14:18.844736    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/135082.pem /etc/ssl/certs/3ec20f2e.0"
	I0116 03:14:18.875756    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0116 03:14:18.906808    5244 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:14:18.913605    5244 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 16 01:40 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:14:18.913605    5244 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 16 01:40 /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:14:18.928345    5244 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0116 03:14:18.935976    5244 command_runner.go:130] > b5213941
	I0116 03:14:18.949628    5244 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0116 03:14:18.979431    5244 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0116 03:14:18.984835    5244 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 03:14:18.984835    5244 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0116 03:14:18.995526    5244 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0116 03:14:19.029474    5244 command_runner.go:130] > cgroupfs
	I0116 03:14:19.030484    5244 cni.go:84] Creating CNI manager for ""
	I0116 03:14:19.030643    5244 cni.go:136] 3 nodes found, recommending kindnet
	I0116 03:14:19.030643    5244 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0116 03:14:19.030764    5244 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.125.42 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-853900 NodeName:multinode-853900-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.125.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.125.42 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0116 03:14:19.030969    5244 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.125.42
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-853900-m03"
	  kubeletExtraArgs:
	    node-ip: 172.27.125.42
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.125.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0116 03:14:19.031054    5244 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-853900-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.125.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-853900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0116 03:14:19.045339    5244 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0116 03:14:19.060016    5244 command_runner.go:130] > kubeadm
	I0116 03:14:19.060577    5244 command_runner.go:130] > kubectl
	I0116 03:14:19.060577    5244 command_runner.go:130] > kubelet
	I0116 03:14:19.060577    5244 binaries.go:44] Found k8s binaries, skipping transfer
	I0116 03:14:19.079886    5244 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0116 03:14:19.095914    5244 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0116 03:14:19.120309    5244 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0116 03:14:19.160151    5244 ssh_runner.go:195] Run: grep 172.27.125.182	control-plane.minikube.internal$ /etc/hosts
	I0116 03:14:19.168392    5244 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.125.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0116 03:14:19.187097    5244 host.go:66] Checking if "multinode-853900" exists ...
	I0116 03:14:19.187887    5244 config.go:182] Loaded profile config "multinode-853900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:14:19.188151    5244 start.go:304] JoinCluster: &{Name:multinode-853900 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-853900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.125.182 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.125.77 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.125.42 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 03:14:19.188420    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0116 03:14:19.188577    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:14:21.358730    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:14:21.358730    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:14:21.358827    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:14:23.908071    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:14:23.908071    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:14:23.908479    5244 sshutil.go:53] new ssh client: &{IP:172.27.125.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\id_rsa Username:docker}
	I0116 03:14:24.122369    5244 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token znrj04.osocdpjlpsjybfwf --discovery-token-ca-cert-hash sha256:66ef9a38e06c175fa30850fd5c63399966a4115300a5c161cb370d2d951391b9 
	I0116 03:14:24.122405    5244 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9339523s)
	I0116 03:14:24.122405    5244 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.27.125.42 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0116 03:14:24.122405    5244 host.go:66] Checking if "multinode-853900" exists ...
	I0116 03:14:24.136533    5244 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-853900-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0116 03:14:24.137532    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:14:26.326484    5244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:14:26.326755    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:14:26.326860    5244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:14:28.849053    5244 main.go:141] libmachine: [stdout =====>] : 172.27.125.182
	
	I0116 03:14:28.849325    5244 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:14:28.849807    5244 sshutil.go:53] new ssh client: &{IP:172.27.125.182 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\id_rsa Username:docker}
	I0116 03:14:29.033845    5244 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0116 03:14:29.103126    5244 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-b8hwf, kube-system/kube-proxy-rfglr
	I0116 03:14:29.104406    5244 command_runner.go:130] > node/multinode-853900-m03 cordoned
	I0116 03:14:29.104406    5244 command_runner.go:130] > node/multinode-853900-m03 drained
	I0116 03:14:29.105235    5244 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-853900-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (4.9686091s)
	I0116 03:14:29.105235    5244 node.go:108] successfully drained node "m03"
	I0116 03:14:29.106280    5244 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 03:14:29.107462    5244 kapi.go:59] client config for multinode-853900: &rest.Config{Host:"https://172.27.125.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x270c520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:14:29.108797    5244 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0116 03:14:29.108928    5244 round_trippers.go:463] DELETE https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:29.108962    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:29.108962    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:29.109007    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:29.109007    5244 round_trippers.go:473]     Content-Type: application/json
	I0116 03:14:29.126883    5244 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0116 03:14:29.126883    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:29.127177    5244 round_trippers.go:580]     Audit-Id: 11441305-cd63-4af7-8b19-d46c95dc8de7
	I0116 03:14:29.127177    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:29.127177    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:29.127177    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:29.127177    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:29.127177    5244 round_trippers.go:580]     Content-Length: 171
	I0116 03:14:29.127177    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:29 GMT
	I0116 03:14:29.127177    5244 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-853900-m03","kind":"nodes","uid":"99b99df3-ad5b-4c59-a7a0-406b850f5433"}}
	I0116 03:14:29.127302    5244 node.go:124] successfully deleted node "m03"
	I0116 03:14:29.127367    5244 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.27.125.42 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0116 03:14:29.127412    5244 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.27.125.42 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0116 03:14:29.127448    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token znrj04.osocdpjlpsjybfwf --discovery-token-ca-cert-hash sha256:66ef9a38e06c175fa30850fd5c63399966a4115300a5c161cb370d2d951391b9 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-853900-m03"
	I0116 03:14:29.446225    5244 command_runner.go:130] ! W0116 03:14:29.448311    1361 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0116 03:14:30.151975    5244 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0116 03:14:31.955855    5244 command_runner.go:130] > [preflight] Running pre-flight checks
	I0116 03:14:31.955855    5244 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0116 03:14:31.955855    5244 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0116 03:14:31.955855    5244 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0116 03:14:31.955855    5244 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0116 03:14:31.955855    5244 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0116 03:14:31.955990    5244 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0116 03:14:31.955990    5244 command_runner.go:130] > This node has joined the cluster:
	I0116 03:14:31.955990    5244 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0116 03:14:31.955990    5244 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0116 03:14:31.956041    5244 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0116 03:14:31.956041    5244 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token znrj04.osocdpjlpsjybfwf --discovery-token-ca-cert-hash sha256:66ef9a38e06c175fa30850fd5c63399966a4115300a5c161cb370d2d951391b9 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-853900-m03": (2.8285746s)
	I0116 03:14:31.956041    5244 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0116 03:14:32.148991    5244 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0116 03:14:32.323622    5244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd minikube.k8s.io/name=multinode-853900 minikube.k8s.io/updated_at=2024_01_16T03_14_32_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0116 03:14:32.799397    5244 command_runner.go:130] > node/multinode-853900-m02 labeled
	I0116 03:14:32.799397    5244 command_runner.go:130] > node/multinode-853900-m03 labeled
	I0116 03:14:32.799397    5244 start.go:306] JoinCluster complete in 13.611156s
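The JoinCluster phase above follows a fixed sequence: drain the stale worker, delete its Node object over the API, then run `kubeadm join` with a freshly minted token. As a rough illustration (not minikube's actual code), the command strings can be composed like this; the token and CA-hash values below are hypothetical placeholders, not values from the log:

```python
# Sketch of the remove-and-rejoin command composition seen in the log.
# All values passed in at the bottom are hypothetical stand-ins.

def drain_cmd(kubectl: str, node: str) -> str:
    # --disable-eviction deletes pods directly instead of using the
    # eviction API; daemonset pods (kindnet, kube-proxy) are ignored.
    return (f"sudo KUBECONFIG=/var/lib/minikube/kubeconfig {kubectl} "
            f"drain {node} --force --grace-period=1 "
            f"--skip-wait-for-delete-timeout=1 --disable-eviction "
            f"--ignore-daemonsets --delete-emptydir-data")

def join_cmd(endpoint: str, token: str, ca_hash: str, node: str) -> str:
    # --cri-socket points at cri-dockerd because this cluster uses the
    # docker runtime; --ignore-preflight-errors=all matches the log.
    return (f"kubeadm join {endpoint} --token {token} "
            f"--discovery-token-ca-cert-hash sha256:{ca_hash} "
            f"--ignore-preflight-errors=all "
            f"--cri-socket /var/run/cri-dockerd.sock --node-name={node}")

kubectl = "/var/lib/minikube/binaries/v1.28.4/kubectl"
cmd = join_cmd("control-plane.minikube.internal:8443",
               "abcdef.0123456789abcdef", "f" * 64,
               "multinode-853900-m03")
print(drain_cmd(kubectl, "multinode-853900-m03"))
print(cmd)
```

Note that `--delete-local-data` from the log is the deprecated spelling of `--delete-emptydir-data` (kubectl itself warns about this above), so the sketch uses only the current flag.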
	I0116 03:14:32.799397    5244 cni.go:84] Creating CNI manager for ""
	I0116 03:14:32.799397    5244 cni.go:136] 3 nodes found, recommending kindnet
	I0116 03:14:32.814971    5244 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0116 03:14:32.822985    5244 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0116 03:14:32.822985    5244 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0116 03:14:32.822985    5244 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0116 03:14:32.822985    5244 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0116 03:14:32.822985    5244 command_runner.go:130] > Access: 2024-01-16 03:08:31.256896000 +0000
	I0116 03:14:32.822985    5244 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0116 03:14:32.822985    5244 command_runner.go:130] > Change: 2024-01-16 03:08:20.148000000 +0000
	I0116 03:14:32.822985    5244 command_runner.go:130] >  Birth: -
	I0116 03:14:32.822985    5244 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0116 03:14:32.822985    5244 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0116 03:14:32.867967    5244 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0116 03:14:33.283306    5244 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0116 03:14:33.283306    5244 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0116 03:14:33.283306    5244 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0116 03:14:33.283306    5244 command_runner.go:130] > daemonset.apps/kindnet configured
	I0116 03:14:33.284266    5244 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 03:14:33.285292    5244 kapi.go:59] client config for multinode-853900: &rest.Config{Host:"https://172.27.125.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x270c520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:14:33.286307    5244 round_trippers.go:463] GET https://172.27.125.182:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0116 03:14:33.286307    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:33.286307    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:33.286307    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:33.290294    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:33.290674    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:33.290736    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:33.290736    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:33.290736    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:33.290736    5244 round_trippers.go:580]     Content-Length: 292
	I0116 03:14:33.290736    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:33 GMT
	I0116 03:14:33.290736    5244 round_trippers.go:580]     Audit-Id: 5d5101d0-80c6-4ffb-9b5b-f80a7b38e909
	I0116 03:14:33.290736    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:33.290736    5244 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"bb9e4be9-a821-417a-b943-b930d6cec07c","resourceVersion":"1765","creationTimestamp":"2024-01-16T02:48:09Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0116 03:14:33.290736    5244 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-853900" context rescaled to 1 replicas
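The "rescaled to 1 replicas" decision above is driven by the `autoscaling/v1` Scale subresource response shown in the log: if `spec.replicas` already matches the target, no update is sent. A minimal sketch of parsing that response with the standard library, using an abbreviated copy of the body from the log:

```python
import json

# Abbreviated Scale subresource body, as returned by
# GET .../deployments/coredns/scale in the log above.
scale_body = '''{"kind":"Scale","apiVersion":"autoscaling/v1",
 "metadata":{"name":"coredns","namespace":"kube-system"},
 "spec":{"replicas":1},
 "status":{"replicas":1,"selector":"k8s-app=kube-dns"}}'''

scale = json.loads(scale_body)
target = 1  # one coredns replica per minikube cluster
needs_update = scale["spec"]["replicas"] != target
print(needs_update)  # False: already at 1 replica, nothing to PUT
```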
	I0116 03:14:33.290736    5244 start.go:223] Will wait 6m0s for node &{Name:m03 IP:172.27.125.42 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0116 03:14:33.291521    5244 out.go:177] * Verifying Kubernetes components...
	I0116 03:14:33.306267    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:14:33.328680    5244 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 03:14:33.329487    5244 kapi.go:59] client config for multinode-853900: &rest.Config{Host:"https://172.27.125.182:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-853900\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x270c520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0116 03:14:33.330533    5244 node_ready.go:35] waiting up to 6m0s for node "multinode-853900-m03" to be "Ready" ...
	I0116 03:14:33.330533    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:33.330533    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:33.330533    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:33.330533    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:33.335500    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:14:33.335500    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:33.335500    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:33.335500    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:33.335500    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:33.335500    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:33 GMT
	I0116 03:14:33.335500    5244 round_trippers.go:580]     Audit-Id: c2bc88b9-60ae-453c-89c9-8f157c485230
	I0116 03:14:33.335500    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:33.336029    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2119","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3612 chars]
	I0116 03:14:33.844195    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:33.844195    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:33.844195    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:33.844195    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:33.853152    5244 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0116 03:14:33.853152    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:33.853152    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:33.853152    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:33 GMT
	I0116 03:14:33.853152    5244 round_trippers.go:580]     Audit-Id: 0a5361ee-a21e-416a-bd19-9a75336694c3
	I0116 03:14:33.853152    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:33.853152    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:33.853152    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:33.854164    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2119","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:me
tadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec [truncated 3612 chars]
	I0116 03:14:34.334561    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:34.334639    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:34.334639    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:34.334716    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:35.049164    5244 round_trippers.go:574] Response Status: 200 OK in 714 milliseconds
	I0116 03:14:35.049164    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:35.049750    5244 round_trippers.go:580]     Audit-Id: b3a1a401-2449-48c3-9039-62527a0f39d0
	I0116 03:14:35.049750    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:35.049750    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:35.049750    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:35.049750    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:35.049750    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:35 GMT
	I0116 03:14:35.050591    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2129","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3721 chars]
	I0116 03:14:35.050591    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:35.050591    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:35.051238    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:35.051238    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:35.056518    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:14:35.056518    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:35.056518    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:35.056518    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:35.056518    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:35 GMT
	I0116 03:14:35.056518    5244 round_trippers.go:580]     Audit-Id: fd55a6da-98f3-4e13-b6c7-6398127cd813
	I0116 03:14:35.056518    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:35.056518    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:35.056518    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2129","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3721 chars]
	I0116 03:14:35.333869    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:35.333869    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:35.333869    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:35.333869    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:35.341157    5244 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 03:14:35.341157    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:35.341157    5244 round_trippers.go:580]     Audit-Id: f4d97efd-796a-4509-9411-7bf667674dcd
	I0116 03:14:35.341157    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:35.341157    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:35.341157    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:35.341157    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:35.341157    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:35 GMT
	I0116 03:14:35.341157    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2129","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3721 chars]
	I0116 03:14:35.341878    5244 node_ready.go:58] node "multinode-853900-m03" has status "Ready":"False"
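The repeated GETs around this point are minikube's `node_ready` wait loop: poll the Node object roughly every 500ms until its `Ready` condition reports `True` or the 6m budget expires. A simplified sketch of that loop, with `get_node` as a stand-in for the API call (stubbed here so it runs without a cluster):

```python
import time

def wait_for_ready(get_node, timeout=360.0, interval=0.5, sleep=time.sleep):
    # Poll until the node's Ready condition is "True" or the deadline
    # passes; returns whether the node became Ready in time.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        conds = get_node().get("status", {}).get("conditions", [])
        if any(c["type"] == "Ready" and c["status"] == "True"
               for c in conds):
            return True
        sleep(interval)
    return False

# Stubbed API responses: the node reports Ready on the third poll.
polls = iter([
    {"status": {"conditions": [{"type": "Ready", "status": "False"}]}},
    {"status": {"conditions": [{"type": "Ready", "status": "False"}]}},
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
])
result = wait_for_ready(lambda: next(polls), sleep=lambda _: None)
print(result)  # True
```

The real client also handles list-style responses and transient API errors; this sketch shows only the polling shape visible in the log.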
	I0116 03:14:35.835192    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:35.835192    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:35.835192    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:35.835192    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:35.839953    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:14:35.840011    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:35.840011    5244 round_trippers.go:580]     Audit-Id: 4a1cc80a-2adc-4a8c-8229-9380b5f190e8
	I0116 03:14:35.840011    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:35.840011    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:35.840093    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:35.840093    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:35.840093    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:35 GMT
	I0116 03:14:35.840353    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2129","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3721 chars]
	I0116 03:14:36.339156    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:36.339156    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:36.339156    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:36.339156    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:36.342752    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:36.342752    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:36.342752    5244 round_trippers.go:580]     Audit-Id: 5d7aee21-9c13-44d0-83d3-5f6c940c089b
	I0116 03:14:36.342752    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:36.342752    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:36.342752    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:36.342752    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:36.342752    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:36 GMT
	I0116 03:14:36.343611    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2129","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3721 chars]
	I0116 03:14:36.844849    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:36.844849    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:36.844849    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:36.844849    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:36.848899    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:14:36.849449    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:36.849516    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:36.849516    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:36.849516    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:36 GMT
	I0116 03:14:36.849516    5244 round_trippers.go:580]     Audit-Id: 010dd635-2f09-4b66-b637-33673030fc1b
	I0116 03:14:36.849516    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:36.849516    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:36.849969    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2129","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3721 chars]
	I0116 03:14:37.344594    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:37.344594    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:37.344594    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:37.344594    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:37.352737    5244 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0116 03:14:37.352737    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:37.352737    5244 round_trippers.go:580]     Audit-Id: 9a82623c-e0d5-4d46-bee2-9e23bba309d5
	I0116 03:14:37.352737    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:37.352737    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:37.352737    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:37.353271    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:37.353271    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:37 GMT
	I0116 03:14:37.353355    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2129","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3721 chars]
	I0116 03:14:37.353940    5244 node_ready.go:58] node "multinode-853900-m03" has status "Ready":"False"
	I0116 03:14:37.845614    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:37.845696    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:37.845696    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:37.845728    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:37.849596    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:37.849596    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:37.849596    5244 round_trippers.go:580]     Audit-Id: 58e41ecf-5818-43f3-aa1b-e766277461d0
	I0116 03:14:37.850354    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:37.850354    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:37.850394    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:37.850529    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:37.850529    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:37 GMT
	I0116 03:14:37.850748    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2129","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3721 chars]
	I0116 03:14:38.342284    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:38.342284    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:38.342284    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:38.342284    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:38.345969    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:38.346756    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:38.346756    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:38.346756    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:38.346756    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:38.346756    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:38 GMT
	I0116 03:14:38.346756    5244 round_trippers.go:580]     Audit-Id: e6495c8b-a792-421f-bef6-5bc42a0da4b5
	I0116 03:14:38.346756    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:38.346756    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2129","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3721 chars]
	I0116 03:14:38.842607    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:38.842665    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:38.842665    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:38.842665    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:38.847198    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:14:38.847429    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:38.847429    5244 round_trippers.go:580]     Audit-Id: 4a2f8fa4-739a-488a-8b1d-53956dacde8c
	I0116 03:14:38.847429    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:38.847429    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:38.847429    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:38.847590    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:38.847590    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:38 GMT
	I0116 03:14:38.847805    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2129","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3721 chars]
	I0116 03:14:39.344761    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:39.344761    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:39.344761    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:39.344761    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:39.349346    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:14:39.349382    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:39.349382    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:39 GMT
	I0116 03:14:39.349382    5244 round_trippers.go:580]     Audit-Id: 80a8e1bb-34d2-4c22-a274-7a80125ba2bd
	I0116 03:14:39.349382    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:39.349382    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:39.349382    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:39.349382    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:39.349680    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2129","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3721 chars]
	I0116 03:14:39.835331    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:39.835399    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:39.835399    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:39.835399    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:39.839184    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:39.839184    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:39.839184    5244 round_trippers.go:580]     Audit-Id: 704b74fb-3781-490e-bd33-1f1b3f13d530
	I0116 03:14:39.839184    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:39.839184    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:39.839184    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:39.839686    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:39.839686    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:39 GMT
	I0116 03:14:39.839830    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2129","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3721 chars]
	I0116 03:14:39.840266    5244 node_ready.go:58] node "multinode-853900-m03" has status "Ready":"False"
	I0116 03:14:40.339496    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:40.339556    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:40.339556    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:40.339556    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:40.343400    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:40.343400    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:40.344275    5244 round_trippers.go:580]     Audit-Id: 37f7dfe2-02db-47b6-9cb2-80f909674a2a
	I0116 03:14:40.344275    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:40.344275    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:40.344275    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:40.344275    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:40.344275    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:40 GMT
	I0116 03:14:40.344545    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2129","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3721 chars]
	I0116 03:14:40.841199    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:40.841289    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:40.841289    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:40.841289    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:40.845822    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:14:40.845822    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:40.845822    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:40.845822    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:40.845822    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:40.846090    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:40.846090    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:40 GMT
	I0116 03:14:40.846090    5244 round_trippers.go:580]     Audit-Id: 01c1d172-d64a-4f2c-bd34-af3338811d92
	I0116 03:14:40.846234    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2129","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3721 chars]
	I0116 03:14:41.331151    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:41.331228    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:41.331228    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:41.331228    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:41.336420    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:14:41.336420    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:41.336420    5244 round_trippers.go:580]     Audit-Id: 004276ba-69d3-40e1-afa7-ba18bc964ba9
	I0116 03:14:41.336420    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:41.336420    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:41.336420    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:41.336420    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:41.336420    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:41 GMT
	I0116 03:14:41.336420    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:41.831710    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:41.831926    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:41.831926    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:41.831926    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:41.835393    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:41.835393    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:41.835393    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:41.835393    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:41.835393    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:41.836458    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:41 GMT
	I0116 03:14:41.836458    5244 round_trippers.go:580]     Audit-Id: 5fcf988e-0e87-4061-8767-2f1748cf7a61
	I0116 03:14:41.836513    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:41.836739    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:42.332676    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:42.332676    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:42.332757    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:42.332757    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:42.340110    5244 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 03:14:42.340110    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:42.340110    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:42.340110    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:42.340110    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:42.340110    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:42 GMT
	I0116 03:14:42.340110    5244 round_trippers.go:580]     Audit-Id: 9692fdac-a2ac-4c10-b1e7-8d68cf31bc72
	I0116 03:14:42.340110    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:42.340493    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:42.340652    5244 node_ready.go:58] node "multinode-853900-m03" has status "Ready":"False"
	I0116 03:14:42.831998    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:42.832107    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:42.832107    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:42.832107    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:42.837485    5244 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:14:42.837485    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:42.837754    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:42 GMT
	I0116 03:14:42.837754    5244 round_trippers.go:580]     Audit-Id: caa21902-bd85-47f3-8487-8c93b550a900
	I0116 03:14:42.837754    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:42.837754    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:42.837754    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:42.837754    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:42.838142    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:43.335919    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:43.336110    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:43.336110    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:43.336110    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:43.341753    5244 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:14:43.341753    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:43.341945    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:43.341945    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:43.341945    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:43 GMT
	I0116 03:14:43.341945    5244 round_trippers.go:580]     Audit-Id: 95c687aa-7c38-440d-9a6c-64a84a486fcc
	I0116 03:14:43.341945    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:43.341945    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:43.342121    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:43.836492    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:43.836557    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:43.836557    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:43.836557    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:43.840444    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:43.841347    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:43.841347    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:43.841347    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:43.841347    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:43 GMT
	I0116 03:14:43.841347    5244 round_trippers.go:580]     Audit-Id: ad07185f-ad81-4aaa-a7ec-95e592aeac1d
	I0116 03:14:43.841347    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:43.841347    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:43.841616    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:44.337316    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:44.337316    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:44.337316    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:44.337316    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:44.341116    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:44.342317    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:44.342317    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:44.342317    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:44 GMT
	I0116 03:14:44.342317    5244 round_trippers.go:580]     Audit-Id: 3e441b22-5235-4b9a-a3d8-9fe720464bbc
	I0116 03:14:44.342317    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:44.342317    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:44.342317    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:44.342317    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:44.343084    5244 node_ready.go:58] node "multinode-853900-m03" has status "Ready":"False"
	I0116 03:14:44.838177    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:44.838177    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:44.838177    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:44.838177    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:44.843837    5244 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:14:44.843837    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:44.843837    5244 round_trippers.go:580]     Audit-Id: 7dc6b8e1-41b6-4ed7-b356-d2e8c2184d24
	I0116 03:14:44.843837    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:44.844140    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:44.844140    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:44.844140    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:44.844259    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:44 GMT
	I0116 03:14:44.844259    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:45.338494    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:45.338609    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:45.338609    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:45.338664    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:45.342550    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:45.342550    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:45.342550    5244 round_trippers.go:580]     Audit-Id: 085bfe9c-29c3-45ac-93d4-0833c16ff3c1
	I0116 03:14:45.343345    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:45.343345    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:45.343345    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:45.343345    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:45.343345    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:45 GMT
	I0116 03:14:45.343781    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:45.840529    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:45.840608    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:45.840608    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:45.840608    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:45.848722    5244 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0116 03:14:45.848751    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:45.848794    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:45.848818    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:45.848818    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:45.848818    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:45.848818    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:45 GMT
	I0116 03:14:45.848818    5244 round_trippers.go:580]     Audit-Id: e2dfc0d7-4208-400f-b7e8-28179bf5d58d
	I0116 03:14:45.848949    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:46.343493    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:46.343550    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:46.343550    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:46.343625    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:46.346946    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:46.346946    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:46.346946    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:46.347947    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:46.347947    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:46.347947    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:46 GMT
	I0116 03:14:46.347947    5244 round_trippers.go:580]     Audit-Id: a43bcf6f-5e79-4f19-a1da-68ab58eecfe3
	I0116 03:14:46.347947    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:46.347947    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:46.347947    5244 node_ready.go:58] node "multinode-853900-m03" has status "Ready":"False"
	I0116 03:14:46.833656    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:46.833748    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:46.833839    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:46.833839    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:46.842997    5244 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0116 03:14:46.842997    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:46.842997    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:46.842997    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:46 GMT
	I0116 03:14:46.842997    5244 round_trippers.go:580]     Audit-Id: 733c00a9-5cb0-477e-90f9-5f64beea565c
	I0116 03:14:46.842997    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:46.842997    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:46.842997    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:46.842997    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:47.335332    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:47.335332    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:47.335469    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:47.335469    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:47.338653    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:47.339746    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:47.339746    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:47 GMT
	I0116 03:14:47.339779    5244 round_trippers.go:580]     Audit-Id: cc1c85ea-f156-4d4f-83cd-a9a4c53c2be7
	I0116 03:14:47.339779    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:47.339779    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:47.339779    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:47.339779    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:47.339925    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:47.836045    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:47.836127    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:47.836127    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:47.836127    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:47.840063    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:47.841064    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:47.841140    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:47 GMT
	I0116 03:14:47.841191    5244 round_trippers.go:580]     Audit-Id: 54d92012-379e-4e1a-8488-9347c7221728
	I0116 03:14:47.841191    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:47.841191    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:47.841191    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:47.841191    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:47.841191    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:48.335816    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:48.335911    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:48.335911    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:48.335911    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:48.340222    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:14:48.340222    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:48.340222    5244 round_trippers.go:580]     Audit-Id: 56eac54e-6871-4098-ad99-27963a7d0d9b
	I0116 03:14:48.340222    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:48.340222    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:48.340222    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:48.340551    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:48.340551    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:48 GMT
	I0116 03:14:48.340658    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:48.838738    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:48.838738    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:48.838738    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:48.838738    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:48.842974    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:14:48.842974    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:48.843201    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:48.843201    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:48.843239    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:48.843239    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:48.843239    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:48 GMT
	I0116 03:14:48.843239    5244 round_trippers.go:580]     Audit-Id: c91dc705-e6c1-4c75-b1e6-bb08acdc523d
	I0116 03:14:48.843461    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:48.843519    5244 node_ready.go:58] node "multinode-853900-m03" has status "Ready":"False"
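The repeating GET requests above are minikube's `node_ready` wait loop: it re-fetches `/api/v1/nodes/multinode-853900-m03` roughly every 500 ms and keeps logging `"Ready":"False"` until the node's Ready condition turns true or the wait times out. A minimal sketch of that poll-until-ready pattern (the `check` function and intervals here are illustrative, not minikube's actual implementation):

```python
import time

def wait_for_ready(check, interval=0.5, timeout=10.0):
    """Poll check() until it returns True or the timeout expires.

    Returns True if the condition was met in time, False otherwise.
    This mirrors the log above: one GET per interval, each result
    inspected for the node's Ready status.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Simulated node status: reports Ready on the third poll,
# standing in for the repeated API responses in the log.
polls = {"count": 0}
def node_is_ready():
    polls["count"] += 1
    return polls["count"] >= 3
```

In the real test the loop above ran against the apiserver at 172.27.125.182:8443; the sketch only captures the control flow, not the HTTP round trips.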
	I0116 03:14:49.342352    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:49.342472    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:49.342472    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:49.342472    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:49.346553    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:14:49.346553    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:49.346553    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:49.346553    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:49.346553    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:49.346553    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:49 GMT
	I0116 03:14:49.346553    5244 round_trippers.go:580]     Audit-Id: c57e382d-160e-4b97-ad78-b58cccc6f999
	I0116 03:14:49.346745    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:49.346811    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:49.842581    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:49.842635    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:49.842635    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:49.842635    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:49.846349    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:49.847233    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:49.847233    5244 round_trippers.go:580]     Audit-Id: c8817ae7-f457-474a-9686-9de40eeb3b36
	I0116 03:14:49.847233    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:49.847310    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:49.847310    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:49.847310    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:49.847310    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:49 GMT
	I0116 03:14:49.847433    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:50.342035    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:50.342117    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:50.342117    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:50.342117    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:50.346540    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:14:50.346540    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:50.346540    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:50.346540    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:50.346540    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:50.346540    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:50.346540    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:50 GMT
	I0116 03:14:50.346540    5244 round_trippers.go:580]     Audit-Id: e3b8cedb-5c67-4352-aea5-7c7657ef993a
	I0116 03:14:50.346540    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:50.845007    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:50.845073    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:50.845134    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:50.845134    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:50.848759    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:50.848759    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:50.848759    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:50.848759    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:50.848759    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:50 GMT
	I0116 03:14:50.848759    5244 round_trippers.go:580]     Audit-Id: db71d9c1-b995-4567-90f6-93061758d26a
	I0116 03:14:50.848759    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:50.848759    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:50.849844    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:50.849844    5244 node_ready.go:58] node "multinode-853900-m03" has status "Ready":"False"
	I0116 03:14:51.334338    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:51.334338    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:51.334424    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:51.334424    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:51.338846    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:14:51.338846    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:51.338846    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:51.339364    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:51.339364    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:51.339364    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:51 GMT
	I0116 03:14:51.339364    5244 round_trippers.go:580]     Audit-Id: 87c38687-8e1d-4b4b-9644-40d042c83ae0
	I0116 03:14:51.339364    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:51.339656    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:51.833437    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:51.833437    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:51.833499    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:51.833499    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:51.837247    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:51.837658    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:51.837658    5244 round_trippers.go:580]     Audit-Id: 2d0ced06-fb18-401f-a7df-e94b41dc737c
	I0116 03:14:51.837658    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:51.837658    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:51.837658    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:51.837658    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:51.837658    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:51 GMT
	I0116 03:14:51.837857    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:52.335475    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:52.335475    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:52.335475    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:52.335589    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:52.340217    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:14:52.340217    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:52.340217    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:52.340217    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:52 GMT
	I0116 03:14:52.340724    5244 round_trippers.go:580]     Audit-Id: 025673a2-1be1-4b3c-ab60-ba64bbb52b4e
	I0116 03:14:52.340724    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:52.340724    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:52.340724    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:52.340860    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:52.836528    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:52.836583    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:52.836583    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:52.836583    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:52.840947    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:14:52.840947    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:52.841706    5244 round_trippers.go:580]     Audit-Id: 42ab089e-4c0f-4413-a662-6b467341aa67
	I0116 03:14:52.841706    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:52.841706    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:52.841706    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:52.841706    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:52.841706    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:52 GMT
	I0116 03:14:52.842050    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2136","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3890 chars]
	I0116 03:14:53.338087    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:53.338183    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:53.338243    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:53.338243    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:53.346085    5244 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 03:14:53.346116    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:53.346116    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:53 GMT
	I0116 03:14:53.346116    5244 round_trippers.go:580]     Audit-Id: 7b4bbe4c-65ee-47ac-91fd-6963474bdeb6
	I0116 03:14:53.346116    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:53.346175    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:53.346175    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:53.346175    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:53.346175    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2156","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:anno
tations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3756 chars]
	I0116 03:14:53.346850    5244 node_ready.go:49] node "multinode-853900-m03" has status "Ready":"True"
	I0116 03:14:53.346850    5244 node_ready.go:38] duration metric: took 20.0161846s waiting for node "multinode-853900-m03" to be "Ready" ...
	I0116 03:14:53.346850    5244 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:14:53.346850    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods
	I0116 03:14:53.346850    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:53.346850    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:53.346850    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:53.352536    5244 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:14:53.352536    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:53.352536    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:53 GMT
	I0116 03:14:53.352536    5244 round_trippers.go:580]     Audit-Id: dadb2a55-4553-4ca8-8b22-5c435c428eb9
	I0116 03:14:53.352536    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:53.352536    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:53.352536    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:53.352536    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:53.355592    5244 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2157"},"items":[{"metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"1761","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82967 chars]
	I0116 03:14:53.359060    5244 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:53.359297    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-62jpz
	I0116 03:14:53.359297    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:53.359297    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:53.359399    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:53.362537    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:53.362537    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:53.363502    5244 round_trippers.go:580]     Audit-Id: 6d75fb3c-d3cc-49cb-b94a-2723509d7022
	I0116 03:14:53.363502    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:53.363502    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:53.363502    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:53.363502    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:53.363502    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:53 GMT
	I0116 03:14:53.363731    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-62jpz","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c028c1eb-0071-40bf-a163-6f71a10dc945","resourceVersion":"1761","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"4e1fa6fc-07be-46ff-9c4b-c00986feafb1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1fa6fc-07be-46ff-9c4b-c00986feafb1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0116 03:14:53.363925    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:14:53.363925    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:53.363925    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:53.363925    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:53.370650    5244 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 03:14:53.370780    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:53.370780    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:53.370853    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:53.370853    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:53.370853    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:53.370853    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:53 GMT
	I0116 03:14:53.370853    5244 round_trippers.go:580]     Audit-Id: a3784f7d-6626-4aeb-b57a-b9dba1ac247b
	I0116 03:14:53.370853    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1774","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0116 03:14:53.371606    5244 pod_ready.go:92] pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:53.371606    5244 pod_ready.go:81] duration metric: took 12.3974ms waiting for pod "coredns-5dd5756b68-62jpz" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:53.371606    5244 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:53.371606    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-853900
	I0116 03:14:53.371606    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:53.371606    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:53.371606    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:53.374195    5244 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:14:53.375216    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:53.375216    5244 round_trippers.go:580]     Audit-Id: 2d8b4d6a-abfa-47d9-9b82-33e869e6682a
	I0116 03:14:53.375216    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:53.375216    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:53.375216    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:53.375216    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:53.375216    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:53 GMT
	I0116 03:14:53.375216    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-853900","namespace":"kube-system","uid":"0830a000-5e72-4c45-a843-1dd557d188eb","resourceVersion":"1718","creationTimestamp":"2024-01-16T03:09:56Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.125.182:2379","kubernetes.io/config.hash":"69d98d086aafe436cd9405e0584ec9d9","kubernetes.io/config.mirror":"69d98d086aafe436cd9405e0584ec9d9","kubernetes.io/config.seen":"2024-01-16T03:09:50.494161665Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:09:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0116 03:14:53.375216    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:14:53.375980    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:53.375980    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:53.375980    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:53.383073    5244 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 03:14:53.383073    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:53.383073    5244 round_trippers.go:580]     Audit-Id: 434a7378-3225-4628-85d2-e2a50d775815
	I0116 03:14:53.383073    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:53.383073    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:53.383073    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:53.383073    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:53.383073    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:53 GMT
	I0116 03:14:53.383073    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1774","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0116 03:14:53.383073    5244 pod_ready.go:92] pod "etcd-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:53.384077    5244 pod_ready.go:81] duration metric: took 12.4711ms waiting for pod "etcd-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:53.384077    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:53.384077    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-853900
	I0116 03:14:53.384077    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:53.384077    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:53.384077    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:53.388092    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:53.388092    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:53.388092    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:53.388092    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:53.388159    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:53.388159    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:53 GMT
	I0116 03:14:53.388159    5244 round_trippers.go:580]     Audit-Id: f9f54d14-6b31-4f15-b067-8914c5366bfc
	I0116 03:14:53.388159    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:53.388159    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-853900","namespace":"kube-system","uid":"cb2bb8c0-e51a-46cf-87f4-5c3ad0287455","resourceVersion":"1722","creationTimestamp":"2024-01-16T03:10:01Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.125.182:8443","kubernetes.io/config.hash":"e8b156384a67a45d4dc14390f3884653","kubernetes.io/config.mirror":"e8b156384a67a45d4dc14390f3884653","kubernetes.io/config.seen":"2024-01-16T03:09:50.494166665Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:10:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0116 03:14:53.388930    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:14:53.388930    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:53.388930    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:53.388930    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:53.391500    5244 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:14:53.391500    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:53.392469    5244 round_trippers.go:580]     Audit-Id: dedc5bd3-beec-4a56-93d3-8423fb0e17ae
	I0116 03:14:53.392469    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:53.392498    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:53.392498    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:53.392498    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:53.392498    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:53 GMT
	I0116 03:14:53.392554    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1774","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0116 03:14:53.393344    5244 pod_ready.go:92] pod "kube-apiserver-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:53.393344    5244 pod_ready.go:81] duration metric: took 9.2668ms waiting for pod "kube-apiserver-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:53.393344    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:53.393344    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-853900
	I0116 03:14:53.393344    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:53.393344    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:53.393344    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:53.396916    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:53.396916    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:53.396916    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:53 GMT
	I0116 03:14:53.396916    5244 round_trippers.go:580]     Audit-Id: dd10ad8a-0e01-41f5-bfae-76c2f26bfef0
	I0116 03:14:53.396916    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:53.396916    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:53.396916    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:53.396916    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:53.396916    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-853900","namespace":"kube-system","uid":"5a4d4e86-9836-401a-8d98-1519ff75a0ec","resourceVersion":"1746","creationTimestamp":"2024-01-16T02:48:08Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f09e1ab837c9ef5b247e4d57afe8993b","kubernetes.io/config.mirror":"f09e1ab837c9ef5b247e4d57afe8993b","kubernetes.io/config.seen":"2024-01-16T02:48:00.146129509Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0116 03:14:53.397912    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:14:53.397912    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:53.397912    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:53.397912    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:53.400911    5244 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0116 03:14:53.401265    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:53.401265    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:53.401355    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:53.401355    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:53.401355    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:53.401355    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:53 GMT
	I0116 03:14:53.401355    5244 round_trippers.go:580]     Audit-Id: ae708c4a-1196-4b41-a95a-88989f20a834
	I0116 03:14:53.401355    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1774","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0116 03:14:53.401885    5244 pod_ready.go:92] pod "kube-controller-manager-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:53.401885    5244 pod_ready.go:81] duration metric: took 8.541ms waiting for pod "kube-controller-manager-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:53.401885    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h977r" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:53.541193    5244 request.go:629] Waited for 138.8249ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h977r
	I0116 03:14:53.541641    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h977r
	I0116 03:14:53.541641    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:53.541641    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:53.541641    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:53.546049    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:53.546090    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:53.546090    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:53.546090    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:53 GMT
	I0116 03:14:53.546090    5244 round_trippers.go:580]     Audit-Id: 0a12b68d-e8b1-4072-99f7-aaa10743e291
	I0116 03:14:53.546090    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:53.546090    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:53.546090    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:53.546090    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-h977r","generateName":"kube-proxy-","namespace":"kube-system","uid":"5434ef27-d483-46c1-a95d-bd86163ee965","resourceVersion":"1943","creationTimestamp":"2024-01-16T02:51:11Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:51:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0116 03:14:53.745073    5244 request.go:629] Waited for 197.131ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:14:53.745196    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m02
	I0116 03:14:53.745196    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:53.745196    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:53.745196    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:53.752602    5244 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0116 03:14:53.752602    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:53.752602    5244 round_trippers.go:580]     Audit-Id: 15650ab3-45c5-4c84-a945-fb61267ff26f
	I0116 03:14:53.752602    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:53.752602    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:53.752602    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:53.752602    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:53.752602    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:53 GMT
	I0116 03:14:53.753394    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m02","uid":"981a8ec0-39e6-4db0-bb4b-bd8a60f20c5d","resourceVersion":"2118","creationTimestamp":"2024-01-16T03:12:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:12:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3805 chars]
	I0116 03:14:53.754160    5244 pod_ready.go:92] pod "kube-proxy-h977r" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:53.754189    5244 pod_ready.go:81] duration metric: took 352.3021ms waiting for pod "kube-proxy-h977r" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:53.754254    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rfglr" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:53.948308    5244 request.go:629] Waited for 193.8241ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfglr
	I0116 03:14:53.948822    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rfglr
	I0116 03:14:53.948822    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:53.948822    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:53.948822    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:53.953658    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:14:53.953658    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:53.953658    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:53.953658    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:53.953658    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:53.953658    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:53.953658    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:53 GMT
	I0116 03:14:53.953658    5244 round_trippers.go:580]     Audit-Id: 0217b347-386d-468a-8180-c17defb93bf4
	I0116 03:14:53.954416    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rfglr","generateName":"kube-proxy-","namespace":"kube-system","uid":"80452c87-583e-40d7-aec9-4c790772a538","resourceVersion":"2125","creationTimestamp":"2024-01-16T02:55:40Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:55:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0116 03:14:54.151789    5244 request.go:629] Waited for 196.5851ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:54.151983    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900-m03
	I0116 03:14:54.151983    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:54.151983    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:54.151983    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:54.158431    5244 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 03:14:54.158431    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:54.158431    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:54.158431    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:54.158431    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:54.158431    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:54 GMT
	I0116 03:14:54.158431    5244 round_trippers.go:580]     Audit-Id: acb0a9e8-ca42-4871-8bc9-9ff9d26c811c
	I0116 03:14:54.158431    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:54.158431    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900-m03","uid":"d5d62422-f9f7-44ce-a360-e54142599a61","resourceVersion":"2156","creationTimestamp":"2024-01-16T03:14:30Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_16T03_14_32_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T03:14:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-det [truncated 3756 chars]
	I0116 03:14:54.159216    5244 pod_ready.go:92] pod "kube-proxy-rfglr" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:54.159216    5244 pod_ready.go:81] duration metric: took 404.9597ms waiting for pod "kube-proxy-rfglr" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:54.159216    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tpc2g" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:54.353358    5244 request.go:629] Waited for 194.0571ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tpc2g
	I0116 03:14:54.353596    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tpc2g
	I0116 03:14:54.353596    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:54.353596    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:54.353596    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:54.357177    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:54.357177    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:54.357177    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:54.357177    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:54.357177    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:54 GMT
	I0116 03:14:54.357177    5244 round_trippers.go:580]     Audit-Id: acf3993d-d71d-42a5-a8f3-242fe7837df0
	I0116 03:14:54.357177    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:54.357782    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:54.358165    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-tpc2g","generateName":"kube-proxy-","namespace":"kube-system","uid":"0cb279ef-9d3a-4c55-9c57-ce7eede8a052","resourceVersion":"1708","creationTimestamp":"2024-01-16T02:48:21Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e9c82e82-71b7-4edb-bfe7-6d9575f3c9e9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0116 03:14:54.540026    5244 request.go:629] Waited for 181.0351ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:14:54.540190    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:14:54.540284    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:54.540284    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:54.540284    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:54.547054    5244 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0116 03:14:54.547054    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:54.547054    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:54.547054    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:54.547054    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:54.547054    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:54.547054    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:54 GMT
	I0116 03:14:54.547054    5244 round_trippers.go:580]     Audit-Id: e1d2e391-b1c0-4edf-ba6c-f3be077b871d
	I0116 03:14:54.547054    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1774","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0116 03:14:54.547877    5244 pod_ready.go:92] pod "kube-proxy-tpc2g" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:54.547877    5244 pod_ready.go:81] duration metric: took 388.6586ms waiting for pod "kube-proxy-tpc2g" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:54.547877    5244 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:54.745283    5244 request.go:629] Waited for 197.4039ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-853900
	I0116 03:14:54.745283    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-853900
	I0116 03:14:54.745283    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:54.745283    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:54.745283    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:54.750442    5244 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0116 03:14:54.750442    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:54.751058    5244 round_trippers.go:580]     Audit-Id: 9ae8649b-3ebf-4e5c-b0ec-6814c68a58fd
	I0116 03:14:54.751058    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:54.751058    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:54.751058    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:54.751058    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:54.751058    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:54 GMT
	I0116 03:14:54.751408    5244 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-853900","namespace":"kube-system","uid":"d75db7e3-c171-428f-9c08-f268ce16da31","resourceVersion":"1723","creationTimestamp":"2024-01-16T02:48:09Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aff36fe37a6d6fc8d309826a0f54f93d","kubernetes.io/config.mirror":"aff36fe37a6d6fc8d309826a0f54f93d","kubernetes.io/config.seen":"2024-01-16T02:48:09.211494477Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0116 03:14:54.947267    5244 request.go:629] Waited for 194.7178ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:14:54.947702    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes/multinode-853900
	I0116 03:14:54.947702    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:54.947702    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:54.947819    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:54.951827    5244 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0116 03:14:54.952616    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:54.952616    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:54.952616    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:54 GMT
	I0116 03:14:54.952616    5244 round_trippers.go:580]     Audit-Id: 09995c97-38eb-40d1-a36c-dae582009ab6
	I0116 03:14:54.952616    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:54.952616    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:54.952616    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:54.952960    5244 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1774","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-16T02:48:05Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0116 03:14:54.953648    5244 pod_ready.go:92] pod "kube-scheduler-multinode-853900" in "kube-system" namespace has status "Ready":"True"
	I0116 03:14:54.953648    5244 pod_ready.go:81] duration metric: took 405.7677ms waiting for pod "kube-scheduler-multinode-853900" in "kube-system" namespace to be "Ready" ...
	I0116 03:14:54.953648    5244 pod_ready.go:38] duration metric: took 1.6067873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0116 03:14:54.953753    5244 system_svc.go:44] waiting for kubelet service to be running ....
	I0116 03:14:54.968042    5244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:14:54.988558    5244 system_svc.go:56] duration metric: took 34.8053ms WaitForService to wait for kubelet.
	I0116 03:14:54.988558    5244 kubeadm.go:581] duration metric: took 21.6976787s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0116 03:14:54.988558    5244 node_conditions.go:102] verifying NodePressure condition ...
	I0116 03:14:55.151446    5244 request.go:629] Waited for 162.6928ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.125.182:8443/api/v1/nodes
	I0116 03:14:55.151551    5244 round_trippers.go:463] GET https://172.27.125.182:8443/api/v1/nodes
	I0116 03:14:55.151715    5244 round_trippers.go:469] Request Headers:
	I0116 03:14:55.151715    5244 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0116 03:14:55.151715    5244 round_trippers.go:473]     Accept: application/json, */*
	I0116 03:14:55.156513    5244 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0116 03:14:55.157250    5244 round_trippers.go:577] Response Headers:
	I0116 03:14:55.157250    5244 round_trippers.go:580]     Audit-Id: f8f83448-9bb8-48a4-b385-aacfec5da02b
	I0116 03:14:55.157250    5244 round_trippers.go:580]     Cache-Control: no-cache, private
	I0116 03:14:55.157250    5244 round_trippers.go:580]     Content-Type: application/json
	I0116 03:14:55.157250    5244 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 5757b0c0-dc7a-4e99-99e1-ce2fd5bc1b99
	I0116 03:14:55.157250    5244 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a79ee68e-3359-462b-87ba-c82649477e8b
	I0116 03:14:55.157321    5244 round_trippers.go:580]     Date: Tue, 16 Jan 2024 03:14:55 GMT
	I0116 03:14:55.157727    5244 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2160"},"items":[{"metadata":{"name":"multinode-853900","uid":"9455fff3-37fc-4b47-83ca-df333321b6bf","resourceVersion":"1774","creationTimestamp":"2024-01-16T02:48:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-853900","kubernetes.io/os":"linux","minikube.k8s.io/commit":"880ec6d67bbc7fe8882aaa6543d5a07427c973fd","minikube.k8s.io/name":"multinode-853900","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_16T02_48_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14717 chars]
	I0116 03:14:55.158527    5244 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:14:55.158527    5244 node_conditions.go:123] node cpu capacity is 2
	I0116 03:14:55.158527    5244 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:14:55.158527    5244 node_conditions.go:123] node cpu capacity is 2
	I0116 03:14:55.158527    5244 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0116 03:14:55.158527    5244 node_conditions.go:123] node cpu capacity is 2
	I0116 03:14:55.158527    5244 node_conditions.go:105] duration metric: took 169.968ms to run NodePressure ...
	I0116 03:14:55.158527    5244 start.go:228] waiting for startup goroutines ...
	I0116 03:14:55.158527    5244 start.go:242] writing updated cluster config ...
	I0116 03:14:55.174676    5244 ssh_runner.go:195] Run: rm -f paused
	I0116 03:14:55.340195    5244 start.go:600] kubectl: 1.29.0, cluster: 1.28.4 (minor skew: 1)
	I0116 03:14:55.341236    5244 out.go:177] * Done! kubectl is now configured to use "multinode-853900" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-01-16 03:08:22 UTC, ends at Tue 2024-01-16 03:15:15 UTC. --
	Jan 16 03:09:58 multinode-853900 dockerd[1042]: time="2024-01-16T03:09:58.259506665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 16 03:09:58 multinode-853900 dockerd[1042]: time="2024-01-16T03:09:58.259704965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:10:01 multinode-853900 cri-dockerd[1244]: time="2024-01-16T03:10:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ac8b326377a230de94680fb522483a74466bd31cc29fd032c5f13cb7865ca544/resolv.conf as [nameserver 172.27.112.1]"
	Jan 16 03:10:01 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:01.970980865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 16 03:10:01 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:01.971212965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:10:01 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:01.973401765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 16 03:10:01 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:01.973542365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:10:12 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:12.688818890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 16 03:10:12 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:12.689866595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:10:12 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:12.690212499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 16 03:10:12 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:12.690467654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:10:12 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:12.710480397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 16 03:10:12 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:12.710901358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:10:12 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:12.711095348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 16 03:10:12 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:12.711248461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:10:13 multinode-853900 cri-dockerd[1244]: time="2024-01-16T03:10:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/86134ffb4b425f3fb6d4f9b222fe242fc2112f263d8f18f1da8b00a988227f1c/resolv.conf as [nameserver 172.27.112.1]"
	Jan 16 03:10:13 multinode-853900 cri-dockerd[1244]: time="2024-01-16T03:10:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bb91423da61cd2319f6856767b07c1e7aaa3aa339c4cd68ec245c9dba3cbe79e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jan 16 03:10:13 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:13.524831538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 16 03:10:13 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:13.525444712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:10:13 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:13.525469399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 16 03:10:13 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:13.525712370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:10:13 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:13.642296344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 16 03:10:13 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:13.642997471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 16 03:10:13 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:13.643123804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 16 03:10:13 multinode-853900 dockerd[1042]: time="2024-01-16T03:10:13.643228048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	164ca98e9992f       8c811b4aec35f                                                                                         5 minutes ago       Running             busybox                   1                   bb91423da61cd       busybox-5bc68d56bd-fp6wc
	aa03c7d6c1a7a       ead0a4a53df89                                                                                         5 minutes ago       Running             coredns                   1                   86134ffb4b425       coredns-5dd5756b68-62jpz
	ddd45e8cc37ae       c7d1297425461                                                                                         5 minutes ago       Running             kindnet-cni               1                   ac8b326377a23       kindnet-x5nvv
	23b6d6beaa689       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       1                   f80ee37894dc7       storage-provisioner
	8ae16f52f18ea       83f6cc407eed8                                                                                         5 minutes ago       Running             kube-proxy                1                   a78c75e660ac6       kube-proxy-tpc2g
	a0e28bd006a85       e3db313c6dbc0                                                                                         5 minutes ago       Running             kube-scheduler            1                   92930d82e54cc       kube-scheduler-multinode-853900
	e6e4df536179c       73deb9a3f7025                                                                                         5 minutes ago       Running             etcd                      0                   31df73680d247       etcd-multinode-853900
	3f098a95f07a0       7fe0e6f37db33                                                                                         5 minutes ago       Running             kube-apiserver            0                   267f493c59c21       kube-apiserver-multinode-853900
	ece3dd47cbb62       d058aa5ab969c                                                                                         5 minutes ago       Running             kube-controller-manager   1                   e2a8eadff8626       kube-controller-manager-multinode-853900
	c4b7f3b3d92db       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   e8afc8bf1589d       busybox-5bc68d56bd-fp6wc
	7c4c2a1e9df5b       ead0a4a53df89                                                                                         26 minutes ago      Exited              coredns                   0                   df918467deeb1       coredns-5dd5756b68-62jpz
	c7157c42967e6       6e38f40d628db                                                                                         26 minutes ago      Exited              storage-provisioner       0                   71976a8048bc5       storage-provisioner
	d0b6d500287e8       kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052              26 minutes ago      Exited              kindnet-cni               0                   b1892deb5d5dd       kindnet-x5nvv
	e4eefc8ffba88       83f6cc407eed8                                                                                         26 minutes ago      Exited              kube-proxy                0                   a5cb81c4b523e       kube-proxy-tpc2g
	7f47011532879       e3db313c6dbc0                                                                                         27 minutes ago      Exited              kube-scheduler            0                   bcdc39931f56e       kube-scheduler-multinode-853900
	f8ce77440648f       d058aa5ab969c                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   7477cf9652147       kube-controller-manager-multinode-853900
	
	
	==> coredns [7c4c2a1e9df5] <==
	[INFO] 10.244.0.3:56016 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000077201s
	[INFO] 10.244.0.3:46500 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088401s
	[INFO] 10.244.0.3:40048 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119302s
	[INFO] 10.244.0.3:60012 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000130501s
	[INFO] 10.244.0.3:53198 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064001s
	[INFO] 10.244.0.3:34162 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070701s
	[INFO] 10.244.0.3:33411 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000589s
	[INFO] 10.244.1.2:45595 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217602s
	[INFO] 10.244.1.2:56102 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000165902s
	[INFO] 10.244.1.2:39624 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000617s
	[INFO] 10.244.1.2:42716 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054901s
	[INFO] 10.244.0.3:42485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000217902s
	[INFO] 10.244.0.3:59644 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000230202s
	[INFO] 10.244.0.3:59058 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118801s
	[INFO] 10.244.0.3:60247 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106301s
	[INFO] 10.244.1.2:51965 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000314404s
	[INFO] 10.244.1.2:38409 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155301s
	[INFO] 10.244.1.2:41179 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116001s
	[INFO] 10.244.1.2:35298 - 5 "PTR IN 1.112.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135802s
	[INFO] 10.244.0.3:37147 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167101s
	[INFO] 10.244.0.3:50056 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000463104s
	[INFO] 10.244.0.3:51075 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000149101s
	[INFO] 10.244.0.3:43165 - 5 "PTR IN 1.112.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000088701s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [aa03c7d6c1a7] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e76cd1f4241fbd336d5e1d56170ae69e8389ff4197cb4bacea4ab86ce4c2ec8f58098e2106677580c06728ae57d9f0250db8f5c40e7a5cff291fc37d7d4dfe8b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60749 - 23784 "HINFO IN 8538715131805840712.552558066430365392. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.035912988s
	
	
	==> describe nodes <==
	Name:               multinode-853900
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-853900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=multinode-853900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_16T02_48_10_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 02:48:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-853900
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:15:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:10:17 +0000   Tue, 16 Jan 2024 02:48:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:10:17 +0000   Tue, 16 Jan 2024 02:48:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:10:17 +0000   Tue, 16 Jan 2024 02:48:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:10:17 +0000   Tue, 16 Jan 2024 03:10:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.125.182
	  Hostname:    multinode-853900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 5c38d91d57d24ae18f014ff7d9d5eddb
	  System UUID:                10054ccc-7b49-694b-9027-8f9af2c15e6e
	  Boot ID:                    e6efeeab-45f8-41ec-8767-35be371029de
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-fp6wc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-5dd5756b68-62jpz                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-multinode-853900                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m20s
	  kube-system                 kindnet-x5nvv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-multinode-853900             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-controller-manager-multinode-853900    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-tpc2g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-multinode-853900             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 26m                    kube-proxy       
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                    kubelet          Node multinode-853900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                    kubelet          Node multinode-853900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                    kubelet          Node multinode-853900 status is now: NodeHasSufficientPID
	  Normal  Starting                 27m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           26m                    node-controller  Node multinode-853900 event: Registered Node multinode-853900 in Controller
	  Normal  NodeReady                26m                    kubelet          Node multinode-853900 status is now: NodeReady
	  Normal  Starting                 5m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m26s (x8 over 5m26s)  kubelet          Node multinode-853900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m26s (x8 over 5m26s)  kubelet          Node multinode-853900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m26s (x7 over 5m26s)  kubelet          Node multinode-853900 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m8s                   node-controller  Node multinode-853900 event: Registered Node multinode-853900 in Controller
	
	
	Name:               multinode-853900-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-853900-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=multinode-853900
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_16T03_14_32_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:12:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-853900-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:15:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:12:30 +0000   Tue, 16 Jan 2024 03:12:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:12:30 +0000   Tue, 16 Jan 2024 03:12:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:12:30 +0000   Tue, 16 Jan 2024 03:12:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:12:30 +0000   Tue, 16 Jan 2024 03:12:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.125.77
	  Hostname:    multinode-853900-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 22d9fd1bfb574790baf7864eb8f8aa90
	  System UUID:                3b004291-ff12-3445-8f31-f8a19c168043
	  Boot ID:                    7bbf9fc7-cdca-4dd6-b7e0-901e08f7f8c2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-4l75v    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kindnet-6s9wr               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-h977r            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 2m53s                  kube-proxy  
	  Normal  Starting                 23m                    kube-proxy  
	  Normal  Starting                 24m                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)      kubelet     Node multinode-853900-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)      kubelet     Node multinode-853900-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)      kubelet     Node multinode-853900-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                23m                    kubelet     Node multinode-853900-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  2m56s (x2 over 2m56s)  kubelet     Node multinode-853900-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m56s (x2 over 2m56s)  kubelet     Node multinode-853900-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m56s (x2 over 2m56s)  kubelet     Node multinode-853900-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m56s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 2m56s                  kubelet     Starting kubelet.
	  Normal  NodeReady                2m46s                  kubelet     Node multinode-853900-m02 status is now: NodeReady
	
	
	Name:               multinode-853900-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-853900-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=880ec6d67bbc7fe8882aaa6543d5a07427c973fd
	                    minikube.k8s.io/name=multinode-853900
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_16T03_14_32_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Jan 2024 03:14:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-853900-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Jan 2024 03:15:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Jan 2024 03:14:53 +0000   Tue, 16 Jan 2024 03:14:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Jan 2024 03:14:53 +0000   Tue, 16 Jan 2024 03:14:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Jan 2024 03:14:53 +0000   Tue, 16 Jan 2024 03:14:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Jan 2024 03:14:53 +0000   Tue, 16 Jan 2024 03:14:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.125.42
	  Hostname:    multinode-853900-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa4291f621f94246bc799af6a1cc22a0
	  System UUID:                a5e4d224-f5f1-594d-809c-a8eeea3e5bc3
	  Boot ID:                    fdf8434b-6044-452e-a6ff-5044a0104991
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-b8hwf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-rfglr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m25s                  kube-proxy       
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 42s                    kube-proxy       
	  Normal  Starting                 19m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)      kubelet          Node multinode-853900-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)      kubelet          Node multinode-853900-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)      kubelet          Node multinode-853900-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                19m                    kubelet          Node multinode-853900-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    9m28s (x2 over 9m28s)  kubelet          Node multinode-853900-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m28s (x2 over 9m28s)  kubelet          Node multinode-853900-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m28s (x2 over 9m28s)  kubelet          Node multinode-853900-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m28s                  kubelet          Starting kubelet.
	  Normal  NodeReady                9m20s                  kubelet          Node multinode-853900-m03 status is now: NodeReady
	  Normal  Starting                 46s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  46s (x2 over 46s)      kubelet          Node multinode-853900-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x2 over 46s)      kubelet          Node multinode-853900-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x2 over 46s)      kubelet          Node multinode-853900-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  46s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           42s                    node-controller  Node multinode-853900-m03 event: Registered Node multinode-853900-m03 in Controller
	  Normal  NodeReady                23s                    kubelet          Node multinode-853900-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000011] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +1.275220] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.065462] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.132833] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.818462] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan16 03:09] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.140787] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[ +25.166095] systemd-fstab-generator[962]: Ignoring "noauto" for root device
	[  +0.557304] systemd-fstab-generator[1002]: Ignoring "noauto" for root device
	[  +0.153996] systemd-fstab-generator[1013]: Ignoring "noauto" for root device
	[  +0.183090] systemd-fstab-generator[1026]: Ignoring "noauto" for root device
	[  +1.432536] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.397373] systemd-fstab-generator[1199]: Ignoring "noauto" for root device
	[  +0.165635] systemd-fstab-generator[1210]: Ignoring "noauto" for root device
	[  +0.149755] systemd-fstab-generator[1221]: Ignoring "noauto" for root device
	[  +0.242565] systemd-fstab-generator[1236]: Ignoring "noauto" for root device
	[  +3.610202] systemd-fstab-generator[1455]: Ignoring "noauto" for root device
	[  +0.812866] kauditd_printk_skb: 29 callbacks suppressed
	[Jan16 03:10] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [e6e4df536179] <==
	{"level":"info","ts":"2024-01-16T03:14:32.777163Z","caller":"traceutil/trace.go:171","msg":"trace[1184380920] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:0; response_revision:2118; }","duration":"240.981249ms","start":"2024-01-16T03:14:32.536172Z","end":"2024-01-16T03:14:32.777153Z","steps":["trace[1184380920] 'agreement among raft nodes before linearized reading'  (duration: 240.792346ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:14:32.777255Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:14:32.451068Z","time spent":"325.456439ms","remote":"127.0.0.1:60836","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-853900-m02\" mod_revision:1970 > success:<request_put:<key:\"/registry/minions/multinode-853900-m02\" value_size:3438 >> failure:<request_range:<key:\"/registry/minions/multinode-853900-m02\" > >"}
	{"level":"warn","ts":"2024-01-16T03:14:32.777483Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.490806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T03:14:32.777511Z","caller":"traceutil/trace.go:171","msg":"trace[959798769] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2118; }","duration":"221.518206ms","start":"2024-01-16T03:14:32.555984Z","end":"2024-01-16T03:14:32.777503Z","steps":["trace[959798769] 'agreement among raft nodes before linearized reading'  (duration: 221.477805ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:14:35.043454Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"878.45308ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3643004249508488441 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-853900-m03.17aab5699f269e65\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-853900-m03.17aab5699f269e65\" value_size:629 lease:3643004249508488348 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-01-16T03:14:35.043604Z","caller":"traceutil/trace.go:171","msg":"trace[1709520582] linearizableReadLoop","detail":"{readStateIndex:2456; appliedIndex:2454; }","duration":"738.733263ms","start":"2024-01-16T03:14:34.304859Z","end":"2024-01-16T03:14:35.043593Z","steps":["trace[1709520582] 'read index received'  (duration: 34.201µs)","trace[1709520582] 'applied index is now lower than readState.Index'  (duration: 738.697962ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-16T03:14:35.043685Z","caller":"traceutil/trace.go:171","msg":"trace[519280933] transaction","detail":"{read_only:false; response_revision:2128; number_of_response:1; }","duration":"880.598121ms","start":"2024-01-16T03:14:34.163079Z","end":"2024-01-16T03:14:35.043677Z","steps":["trace[519280933] 'compare'  (duration: 877.852169ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:14:35.043721Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:14:34.163056Z","time spent":"880.647621ms","remote":"127.0.0.1:60812","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":709,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/multinode-853900-m03.17aab5699f269e65\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-853900-m03.17aab5699f269e65\" value_size:629 lease:3643004249508488348 >> failure:<>"}
	{"level":"info","ts":"2024-01-16T03:14:35.043987Z","caller":"traceutil/trace.go:171","msg":"trace[1658325002] transaction","detail":"{read_only:false; response_revision:2129; number_of_response:1; }","duration":"833.854946ms","start":"2024-01-16T03:14:34.210124Z","end":"2024-01-16T03:14:35.043979Z","steps":["trace[1658325002] 'process raft request'  (duration: 833.420037ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:14:35.044218Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:14:34.210101Z","time spent":"834.081149ms","remote":"127.0.0.1:60836","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3361,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-853900-m03\" mod_revision:2119 > success:<request_put:<key:\"/registry/minions/multinode-853900-m03\" value_size:3315 >> failure:<request_range:<key:\"/registry/minions/multinode-853900-m03\" > >"}
	{"level":"warn","ts":"2024-01-16T03:14:35.044428Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"739.733982ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:3 size:11627"}
	{"level":"info","ts":"2024-01-16T03:14:35.044514Z","caller":"traceutil/trace.go:171","msg":"trace[2079416928] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:3; response_revision:2129; }","duration":"739.822684ms","start":"2024-01-16T03:14:34.304683Z","end":"2024-01-16T03:14:35.044506Z","steps":["trace[2079416928] 'agreement among raft nodes before linearized reading'  (duration: 739.702381ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:14:35.04454Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:14:34.304665Z","time spent":"739.867084ms","remote":"127.0.0.1:60836","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":3,"response size":11650,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" "}
	{"level":"warn","ts":"2024-01-16T03:14:35.04473Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"704.630625ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-853900-m03\" ","response":"range_response_count:1 size:3376"}
	{"level":"info","ts":"2024-01-16T03:14:35.044882Z","caller":"traceutil/trace.go:171","msg":"trace[417129177] range","detail":"{range_begin:/registry/minions/multinode-853900-m03; range_end:; response_count:1; response_revision:2129; }","duration":"704.777928ms","start":"2024-01-16T03:14:34.340093Z","end":"2024-01-16T03:14:35.044871Z","steps":["trace[417129177] 'agreement among raft nodes before linearized reading'  (duration: 704.613025ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:14:35.044924Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:14:34.340077Z","time spent":"704.834029ms","remote":"127.0.0.1:60836","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3399,"request content":"key:\"/registry/minions/multinode-853900-m03\" "}
	{"level":"warn","ts":"2024-01-16T03:14:35.045259Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"489.040086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T03:14:35.045284Z","caller":"traceutil/trace.go:171","msg":"trace[151202073] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2129; }","duration":"489.066787ms","start":"2024-01-16T03:14:34.55621Z","end":"2024-01-16T03:14:35.045277Z","steps":["trace[151202073] 'agreement among raft nodes before linearized reading'  (duration: 489.022686ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:14:35.045304Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:14:34.556179Z","time spent":"489.119887ms","remote":"127.0.0.1:60784","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-01-16T03:14:35.045411Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"604.279045ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-16T03:14:35.045433Z","caller":"traceutil/trace.go:171","msg":"trace[651908044] range","detail":"{range_begin:/registry/csidrivers/; range_end:/registry/csidrivers0; response_count:0; response_revision:2129; }","duration":"604.303445ms","start":"2024-01-16T03:14:34.441124Z","end":"2024-01-16T03:14:35.045427Z","steps":["trace[651908044] 'agreement among raft nodes before linearized reading'  (duration: 604.264645ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:14:35.04545Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:14:34.441109Z","time spent":"604.336346ms","remote":"127.0.0.1:60886","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":0,"response size":28,"request content":"key:\"/registry/csidrivers/\" range_end:\"/registry/csidrivers0\" count_only:true "}
	{"level":"warn","ts":"2024-01-16T03:14:35.045551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"686.762291ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-01-16T03:14:35.045571Z","caller":"traceutil/trace.go:171","msg":"trace[1805765448] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2129; }","duration":"686.782491ms","start":"2024-01-16T03:14:34.358783Z","end":"2024-01-16T03:14:35.045565Z","steps":["trace[1805765448] 'agreement among raft nodes before linearized reading'  (duration: 686.74159ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-16T03:14:35.045593Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-16T03:14:34.358683Z","time spent":"686.902993ms","remote":"127.0.0.1:60832","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1140,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	
	
	==> kernel <==
	 03:15:16 up 7 min,  0 users,  load average: 0.51, 0.64, 0.32
	Linux multinode-853900 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [d0b6d500287e] <==
	I0116 03:06:04.043613       1 main.go:250] Node multinode-853900-m03 has CIDR [10.244.3.0/24] 
	I0116 03:06:14.059924       1 main.go:223] Handling node with IPs: map[172.27.112.69:{}]
	I0116 03:06:14.060036       1 main.go:227] handling current node
	I0116 03:06:14.060052       1 main.go:223] Handling node with IPs: map[172.27.122.78:{}]
	I0116 03:06:14.060064       1 main.go:250] Node multinode-853900-m02 has CIDR [10.244.1.0/24] 
	I0116 03:06:14.060229       1 main.go:223] Handling node with IPs: map[172.27.116.8:{}]
	I0116 03:06:14.060262       1 main.go:250] Node multinode-853900-m03 has CIDR [10.244.3.0/24] 
	I0116 03:06:24.077938       1 main.go:223] Handling node with IPs: map[172.27.112.69:{}]
	I0116 03:06:24.078123       1 main.go:227] handling current node
	I0116 03:06:24.078145       1 main.go:223] Handling node with IPs: map[172.27.122.78:{}]
	I0116 03:06:24.078157       1 main.go:250] Node multinode-853900-m02 has CIDR [10.244.1.0/24] 
	I0116 03:06:24.078600       1 main.go:223] Handling node with IPs: map[172.27.116.8:{}]
	I0116 03:06:24.078625       1 main.go:250] Node multinode-853900-m03 has CIDR [10.244.3.0/24] 
	I0116 03:06:34.096397       1 main.go:223] Handling node with IPs: map[172.27.112.69:{}]
	I0116 03:06:34.096517       1 main.go:227] handling current node
	I0116 03:06:34.096533       1 main.go:223] Handling node with IPs: map[172.27.122.78:{}]
	I0116 03:06:34.096541       1 main.go:250] Node multinode-853900-m02 has CIDR [10.244.1.0/24] 
	I0116 03:06:34.096682       1 main.go:223] Handling node with IPs: map[172.27.116.8:{}]
	I0116 03:06:34.096715       1 main.go:250] Node multinode-853900-m03 has CIDR [10.244.3.0/24] 
	I0116 03:06:44.113533       1 main.go:223] Handling node with IPs: map[172.27.112.69:{}]
	I0116 03:06:44.113582       1 main.go:227] handling current node
	I0116 03:06:44.113596       1 main.go:223] Handling node with IPs: map[172.27.122.78:{}]
	I0116 03:06:44.113603       1 main.go:250] Node multinode-853900-m02 has CIDR [10.244.1.0/24] 
	I0116 03:06:44.113725       1 main.go:223] Handling node with IPs: map[172.27.116.8:{}]
	I0116 03:06:44.113755       1 main.go:250] Node multinode-853900-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ddd45e8cc37a] <==
	I0116 03:14:35.064668       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.27.125.42 Flags: [] Table: 0} 
	I0116 03:14:45.072081       1 main.go:223] Handling node with IPs: map[172.27.125.182:{}]
	I0116 03:14:45.072343       1 main.go:227] handling current node
	I0116 03:14:45.072358       1 main.go:223] Handling node with IPs: map[172.27.125.77:{}]
	I0116 03:14:45.072367       1 main.go:250] Node multinode-853900-m02 has CIDR [10.244.1.0/24] 
	I0116 03:14:45.072478       1 main.go:223] Handling node with IPs: map[172.27.125.42:{}]
	I0116 03:14:45.072507       1 main.go:250] Node multinode-853900-m03 has CIDR [10.244.2.0/24] 
	I0116 03:14:55.081375       1 main.go:223] Handling node with IPs: map[172.27.125.182:{}]
	I0116 03:14:55.081448       1 main.go:227] handling current node
	I0116 03:14:55.081461       1 main.go:223] Handling node with IPs: map[172.27.125.77:{}]
	I0116 03:14:55.081469       1 main.go:250] Node multinode-853900-m02 has CIDR [10.244.1.0/24] 
	I0116 03:14:55.082376       1 main.go:223] Handling node with IPs: map[172.27.125.42:{}]
	I0116 03:14:55.082488       1 main.go:250] Node multinode-853900-m03 has CIDR [10.244.2.0/24] 
	I0116 03:15:05.093135       1 main.go:223] Handling node with IPs: map[172.27.125.182:{}]
	I0116 03:15:05.093177       1 main.go:227] handling current node
	I0116 03:15:05.093190       1 main.go:223] Handling node with IPs: map[172.27.125.77:{}]
	I0116 03:15:05.093197       1 main.go:250] Node multinode-853900-m02 has CIDR [10.244.1.0/24] 
	I0116 03:15:05.093454       1 main.go:223] Handling node with IPs: map[172.27.125.42:{}]
	I0116 03:15:05.093536       1 main.go:250] Node multinode-853900-m03 has CIDR [10.244.2.0/24] 
	I0116 03:15:15.100924       1 main.go:223] Handling node with IPs: map[172.27.125.182:{}]
	I0116 03:15:15.100969       1 main.go:227] handling current node
	I0116 03:15:15.100981       1 main.go:223] Handling node with IPs: map[172.27.125.77:{}]
	I0116 03:15:15.100989       1 main.go:250] Node multinode-853900-m02 has CIDR [10.244.1.0/24] 
	I0116 03:15:15.101534       1 main.go:223] Handling node with IPs: map[172.27.125.42:{}]
	I0116 03:15:15.101603       1 main.go:250] Node multinode-853900-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [3f098a95f07a] <==
	I0116 03:09:57.630114       1 controller.go:624] quota admission added evaluator for: endpoints
	I0116 03:09:57.638500       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0116 03:09:59.447430       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0116 03:09:59.625153       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0116 03:09:59.638131       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0116 03:09:59.732218       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0116 03:09:59.740943       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0116 03:14:35.046947       1 trace.go:236] Trace[1172850504]: "Create" accept:application/vnd.kubernetes.protobuf, */*,audit-id:31bf90a7-d86c-4709-95a3-7f472b5c2d1a,client:172.27.125.182,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/default/events,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:node-controller,verb:POST (16-Jan-2024 03:14:34.161) (total time: 885ms):
	Trace[1172850504]: ["Create etcd3" audit-id:31bf90a7-d86c-4709-95a3-7f472b5c2d1a,key:/events/default/multinode-853900-m03.17aab5699f269e65,type:*core.Event,resource:events 884ms (03:14:34.162)
	Trace[1172850504]:  ---"Txn call succeeded" 884ms (03:14:35.046)]
	Trace[1172850504]: [885.26901ms] [885.26901ms] END
	I0116 03:14:35.048034       1 trace.go:236] Trace[1190271762]: "Get" accept:application/json, */*,audit-id:0b3327f9-3067-46c2-bc76-2eb005235eb6,client:172.27.125.182,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (16-Jan-2024 03:14:34.357) (total time: 690ms):
	Trace[1190271762]: ---"About to write a response" 689ms (03:14:35.047)
	Trace[1190271762]: [690.011752ms] [690.011752ms] END
	I0116 03:14:35.048858       1 trace.go:236] Trace[1809545818]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:a75d9697-9c8d-4284-89eb-0c982d0e9a0f,client:172.27.125.182,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/multinode-853900-m03,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:node-controller,verb:PATCH (16-Jan-2024 03:14:34.205) (total time: 842ms):
	Trace[1809545818]: ["GuaranteedUpdate etcd3" audit-id:a75d9697-9c8d-4284-89eb-0c982d0e9a0f,key:/minions/multinode-853900-m03,type:*core.Node,resource:nodes 842ms (03:14:34.206)
	Trace[1809545818]:  ---"Txn call completed" 839ms (03:14:35.048)]
	Trace[1809545818]: ---"Object stored in database" 839ms (03:14:35.048)
	Trace[1809545818]: [842.933518ms] [842.933518ms] END
	I0116 03:14:35.050653       1 trace.go:236] Trace[806821856]: "List" accept:application/json, */*,audit-id:46220dcd-58d2-47f5-a3e3-cccfbcd75e43,client:172.27.125.182,protocol:HTTP/2.0,resource:nodes,scope:cluster,url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,verb:LIST (16-Jan-2024 03:14:34.303) (total time: 747ms):
	Trace[806821856]: ["List(recursive=true) etcd3" audit-id:46220dcd-58d2-47f5-a3e3-cccfbcd75e43,key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: 746ms (03:14:34.303)]
	Trace[806821856]: [747.003122ms] [747.003122ms] END
	I0116 03:14:35.053002       1 trace.go:236] Trace[1772742838]: "Get" accept:application/json, */*,audit-id:b3a1a401-2449-48c3-9039-62527a0f39d0,client:172.27.112.1,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/multinode-853900-m03,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:GET (16-Jan-2024 03:14:34.339) (total time: 713ms):
	Trace[1772742838]: ---"About to write a response" 712ms (03:14:35.051)
	Trace[1772742838]: [713.682699ms] [713.682699ms] END
	
	
	==> kube-controller-manager [ece3dd47cbb6] <==
	I0116 03:12:16.393005       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-4l75v"
	I0116 03:12:16.406606       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="27.546084ms"
	I0116 03:12:16.420695       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="13.465601ms"
	I0116 03:12:16.420978       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="122.017µs"
	I0116 03:12:20.803830       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-853900-m02\" does not exist"
	I0116 03:12:20.804812       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-9t8fh" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-9t8fh"
	I0116 03:12:20.813657       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-853900-m02" podCIDRs=["10.244.1.0/24"]
	I0116 03:12:21.664064       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="102.013µs"
	I0116 03:12:30.295682       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853900-m02"
	I0116 03:12:30.320230       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="60.406µs"
	I0116 03:12:34.133334       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-9t8fh" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-9t8fh"
	I0116 03:12:36.780540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="97.71µs"
	I0116 03:12:37.115435       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="183.719µs"
	I0116 03:12:37.119987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="109.811µs"
	I0116 03:12:38.618802       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="81.208µs"
	I0116 03:12:38.639184       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.504µs"
	I0116 03:12:40.651620       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.797869ms"
	I0116 03:12:40.652478       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="34.704µs"
	I0116 03:14:29.131418       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853900-m02"
	I0116 03:14:29.158436       1 event.go:307] "Event occurred" object="multinode-853900-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-853900-m03 event: Removing Node multinode-853900-m03 from Controller"
	I0116 03:14:30.776344       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853900-m02"
	I0116 03:14:30.776908       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-853900-m03\" does not exist"
	I0116 03:14:30.787264       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-853900-m03" podCIDRs=["10.244.2.0/24"]
	I0116 03:14:34.159693       1 event.go:307] "Event occurred" object="multinode-853900-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-853900-m03 event: Registered Node multinode-853900-m03 in Controller"
	I0116 03:14:53.140605       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853900-m02"
	
	
	==> kube-controller-manager [f8ce77440648] <==
	I0116 02:51:55.866270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="51.615325ms"
	I0116 02:51:55.886103       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="19.750239ms"
	I0116 02:51:55.915560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="29.317355ms"
	I0116 02:51:55.915940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="165.602µs"
	I0116 02:51:58.125598       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="10.719425ms"
	I0116 02:51:58.125666       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="34.701µs"
	I0116 02:51:58.865374       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.040006ms"
	I0116 02:51:58.866717       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="65.601µs"
	I0116 02:55:40.319115       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853900-m02"
	I0116 02:55:40.322073       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-853900-m03\" does not exist"
	I0116 02:55:40.336513       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-853900-m03" podCIDRs=["10.244.2.0/24"]
	I0116 02:55:40.364220       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-b8hwf"
	I0116 02:55:40.371547       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rfglr"
	I0116 02:55:40.664026       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-853900-m03"
	I0116 02:55:40.664511       1 event.go:307] "Event occurred" object="multinode-853900-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-853900-m03 event: Registered Node multinode-853900-m03 in Controller"
	I0116 02:55:56.951662       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853900-m02"
	I0116 03:03:30.794028       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853900-m02"
	I0116 03:03:30.795897       1 event.go:307] "Event occurred" object="multinode-853900-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-853900-m03 status is now: NodeNotReady"
	I0116 03:03:30.809412       1 event.go:307] "Event occurred" object="kube-system/kindnet-b8hwf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0116 03:03:30.825247       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-rfglr" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0116 03:05:47.261911       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853900-m02"
	I0116 03:05:48.560691       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-853900-m03\" does not exist"
	I0116 03:05:48.562420       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853900-m02"
	I0116 03:05:48.574586       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-853900-m03" podCIDRs=["10.244.3.0/24"]
	I0116 03:05:56.388627       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-853900-m02"
	
	
	==> kube-proxy [8ae16f52f18e] <==
	I0116 03:09:58.089732       1 server_others.go:69] "Using iptables proxy"
	I0116 03:09:58.182034       1 node.go:141] Successfully retrieved node IP: 172.27.125.182
	I0116 03:09:58.761359       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 03:09:58.761717       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 03:09:58.767412       1 server_others.go:152] "Using iptables Proxier"
	I0116 03:09:58.768887       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 03:09:58.769297       1 server.go:846] "Version info" version="v1.28.4"
	I0116 03:09:58.769738       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:09:58.775662       1 config.go:188] "Starting service config controller"
	I0116 03:09:58.775874       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 03:09:58.776130       1 config.go:97] "Starting endpoint slice config controller"
	I0116 03:09:58.776231       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 03:09:58.777487       1 config.go:315] "Starting node config controller"
	I0116 03:09:58.777826       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 03:09:58.898306       1 shared_informer.go:318] Caches are synced for node config
	I0116 03:09:58.898455       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 03:09:58.904861       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [e4eefc8ffba8] <==
	I0116 02:48:22.633014       1 server_others.go:69] "Using iptables proxy"
	I0116 02:48:22.649027       1 node.go:141] Successfully retrieved node IP: 172.27.112.69
	I0116 02:48:22.715154       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0116 02:48:22.715510       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0116 02:48:22.719363       1 server_others.go:152] "Using iptables Proxier"
	I0116 02:48:22.719544       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0116 02:48:22.720518       1 server.go:846] "Version info" version="v1.28.4"
	I0116 02:48:22.720540       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 02:48:22.721403       1 config.go:188] "Starting service config controller"
	I0116 02:48:22.721551       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0116 02:48:22.721582       1 config.go:97] "Starting endpoint slice config controller"
	I0116 02:48:22.721589       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0116 02:48:22.725929       1 config.go:315] "Starting node config controller"
	I0116 02:48:22.726027       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0116 02:48:22.822773       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0116 02:48:22.822835       1 shared_informer.go:318] Caches are synced for service config
	I0116 02:48:22.826164       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7f4701153287] <==
	W0116 02:48:06.437001       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0116 02:48:06.437032       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0116 02:48:06.482504       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0116 02:48:06.483065       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0116 02:48:06.509167       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0116 02:48:06.509467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0116 02:48:06.557817       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0116 02:48:06.557847       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0116 02:48:06.749156       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0116 02:48:06.749269       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0116 02:48:06.780250       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0116 02:48:06.780472       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0116 02:48:06.793910       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0116 02:48:06.794128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0116 02:48:06.797405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0116 02:48:06.797622       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0116 02:48:06.893978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0116 02:48:06.894377       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0116 02:48:07.103507       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0116 02:48:07.103541       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0116 02:48:08.799264       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 03:06:48.709568       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0116 03:06:48.711556       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0116 03:06:48.711670       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0116 03:06:48.712686       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a0e28bd006a8] <==
	I0116 03:09:54.118883       1 serving.go:348] Generated self-signed cert in-memory
	W0116 03:09:56.316865       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0116 03:09:56.316899       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0116 03:09:56.316912       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0116 03:09:56.316920       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0116 03:09:56.367361       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0116 03:09:56.367389       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0116 03:09:56.377589       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0116 03:09:56.379497       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0116 03:09:56.380082       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0116 03:09:56.380140       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0116 03:09:56.480916       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-16 03:08:22 UTC, ends at Tue 2024-01-16 03:15:16 UTC. --
	Jan 16 03:10:09 multinode-853900 kubelet[1461]: E0116 03:10:09.593984    1461 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5dd5756b68-62jpz" podUID="c028c1eb-0071-40bf-a163-6f71a10dc945"
	Jan 16 03:10:13 multinode-853900 kubelet[1461]: I0116 03:10:13.316357    1461 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86134ffb4b425f3fb6d4f9b222fe242fc2112f263d8f18f1da8b00a988227f1c"
	Jan 16 03:10:13 multinode-853900 kubelet[1461]: I0116 03:10:13.418393    1461 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb91423da61cd2319f6856767b07c1e7aaa3aa339c4cd68ec245c9dba3cbe79e"
	Jan 16 03:10:50 multinode-853900 kubelet[1461]: I0116 03:10:50.595464    1461 scope.go:117] "RemoveContainer" containerID="dcdfa712e694d2f98d01a3ccbc2349b0e7efb64746bbd8b98bbf616966b606ee"
	Jan 16 03:10:50 multinode-853900 kubelet[1461]: E0116 03:10:50.627244    1461 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:10:50 multinode-853900 kubelet[1461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:10:50 multinode-853900 kubelet[1461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:10:50 multinode-853900 kubelet[1461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:10:50 multinode-853900 kubelet[1461]: I0116 03:10:50.632255    1461 scope.go:117] "RemoveContainer" containerID="e829a48e9f669d448cfcffc669aa82dda3df2019316862c93f095d62535df965"
	Jan 16 03:11:50 multinode-853900 kubelet[1461]: E0116 03:11:50.628148    1461 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:11:50 multinode-853900 kubelet[1461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:11:50 multinode-853900 kubelet[1461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:11:50 multinode-853900 kubelet[1461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:12:50 multinode-853900 kubelet[1461]: E0116 03:12:50.621649    1461 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:12:50 multinode-853900 kubelet[1461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:12:50 multinode-853900 kubelet[1461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:12:50 multinode-853900 kubelet[1461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:13:50 multinode-853900 kubelet[1461]: E0116 03:13:50.620591    1461 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:13:50 multinode-853900 kubelet[1461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:13:50 multinode-853900 kubelet[1461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:13:50 multinode-853900 kubelet[1461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 16 03:14:50 multinode-853900 kubelet[1461]: E0116 03:14:50.620965    1461 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 16 03:14:50 multinode-853900 kubelet[1461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 16 03:14:50 multinode-853900 kubelet[1461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 16 03:14:50 multinode-853900 kubelet[1461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 03:15:08.045828    3508 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-853900 -n multinode-853900
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-853900 -n multinode-853900: (12.2452875s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-853900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (537.72s)
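The kubelet journal quoted above shows the same "Could not set up iptables canary" error repeating once a minute (the minikube guest kernel lacks the ip6tables `nat` table module). As a quick triage step, the recurring error can be counted from a captured journal to confirm it is periodic background noise rather than a new fault. This is a sketch, not part of the test suite; `kubelet.log` is a hypothetical capture, e.g. via `minikube ssh -p multinode-853900 -- sudo journalctl -u kubelet`:

```shell
# Embed a small excerpt of the kubelet journal for illustration; in practice
# this file would be captured from the VM with `minikube ssh ... journalctl -u kubelet`.
cat > kubelet.log <<'EOF'
Jan 16 03:10:50 multinode-853900 kubelet[1461]: E0116 03:10:50.627244    1461 iptables.go:575] "Could not set up iptables canary" err=<
Jan 16 03:11:50 multinode-853900 kubelet[1461]: E0116 03:11:50.628148    1461 iptables.go:575] "Could not set up iptables canary" err=<
Jan 16 03:12:50 multinode-853900 kubelet[1461]: E0116 03:12:50.621649    1461 iptables.go:575] "Could not set up iptables canary" err=<
EOF

# Count the occurrences; a steady once-per-minute cadence suggests the known
# ip6tables/nat limitation rather than a regression introduced by this run.
grep -c 'Could not set up iptables canary' kubelet.log
```

With the three-line excerpt above this prints `3`; on a full journal the count should grow by roughly one per minute of uptime if the error is purely the canary retry loop.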

                                                
                                    
TestScheduledStopWindows (279.99s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-033200 --memory=2048 --driver=hyperv
E0116 03:28:13.033604   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 03:28:29.850328   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 03:28:46.626737   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p scheduled-stop-033200 --memory=2048 --driver=hyperv: exit status 90 (3m26.2907011s)

                                                
                                                
-- stdout --
	* [scheduled-stop-033200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node scheduled-stop-033200 in cluster scheduled-stop-033200
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 03:25:33.086692   12888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2024-01-16 03:26:35 UTC, ends at Tue 2024-01-16 03:28:59 UTC. --
	Jan 16 03:27:26 scheduled-stop-033200 systemd[1]: Starting Docker Application Container Engine...
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:26.895019357Z" level=info msg="Starting up"
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:26.896047867Z" level=info msg="containerd not running, starting managed containerd"
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:26.897156077Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.933154404Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.961341460Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.961463261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.964063485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.964389587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.964729191Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.964865392Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.964973293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.965125194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.965237495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.965424397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.965979302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.966081903Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.966100403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.966396506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.966653308Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.966767509Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.966916410Z" level=info msg="metadata content store policy set" policy=shared
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.978797218Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.978941420Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.978966520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979104121Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979302923Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979328223Z" level=info msg="NRI interface is disabled by configuration."
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979347323Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979582825Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979613826Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979632326Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979651226Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979669926Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979692226Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979708727Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979724427Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979741527Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979781927Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979866228Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979885728Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979991929Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.980990238Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981062839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981092939Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981125239Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981224040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981265241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981282841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981300341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981321141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981358942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981374642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981388742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981414242Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981599544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981643644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981670044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981688645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981706645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981723145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981738745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981753545Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981771645Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981788046Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981800546Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.982201849Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.982539552Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.982685454Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.982713954Z" level=info msg="containerd successfully booted in 0.052496s"
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.027239243Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.040530456Z" level=info msg="Loading containers: start."
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.257659105Z" level=info msg="Loading containers: done."
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.275077353Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.275102554Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.275111054Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.275117654Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.275171554Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.275323056Z" level=info msg="Daemon has completed initialization"
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.327451499Z" level=info msg="API listen on [::]:2376"
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.327601601Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 16 03:27:27 scheduled-stop-033200 systemd[1]: Started Docker Application Container Engine.
	Jan 16 03:27:58 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:58.048026360Z" level=info msg="Processing signal 'terminated'"
	Jan 16 03:27:58 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:58.049736960Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 16 03:27:58 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:58.050218760Z" level=info msg="Daemon shutdown complete"
	Jan 16 03:27:58 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:58.051146460Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 16 03:27:58 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:58.051283960Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 16 03:27:58 scheduled-stop-033200 systemd[1]: Stopping Docker Application Container Engine...
	Jan 16 03:27:59 scheduled-stop-033200 systemd[1]: docker.service: Succeeded.
	Jan 16 03:27:59 scheduled-stop-033200 systemd[1]: Stopped Docker Application Container Engine.
	Jan 16 03:27:59 scheduled-stop-033200 systemd[1]: Starting Docker Application Container Engine...
	Jan 16 03:27:59 scheduled-stop-033200 dockerd[1013]: time="2024-01-16T03:27:59.126283560Z" level=info msg="Starting up"
	Jan 16 03:28:59 scheduled-stop-033200 dockerd[1013]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 16 03:28:59 scheduled-stop-033200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 16 03:28:59 scheduled-stop-033200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 16 03:28:59 scheduled-stop-033200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 90

                                                
                                                
-- stdout --
	* [scheduled-stop-033200] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node scheduled-stop-033200 in cluster scheduled-stop-033200
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 03:25:33.086692   12888 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	-- Journal begins at Tue 2024-01-16 03:26:35 UTC, ends at Tue 2024-01-16 03:28:59 UTC. --
	Jan 16 03:27:26 scheduled-stop-033200 systemd[1]: Starting Docker Application Container Engine...
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:26.895019357Z" level=info msg="Starting up"
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:26.896047867Z" level=info msg="containerd not running, starting managed containerd"
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:26.897156077Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.933154404Z" level=info msg="starting containerd" revision=64b8a811b07ba6288238eefc14d898ee0b5b99ba version=v1.7.11
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.961341460Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.961463261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.964063485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.57\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.964389587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.964729191Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.964865392Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.964973293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.965125194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.965237495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.965424397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.965979302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.966081903Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.966100403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.966396506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.966653308Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.966767509Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.966916410Z" level=info msg="metadata content store policy set" policy=shared
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.978797218Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.978941420Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.978966520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979104121Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979302923Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979328223Z" level=info msg="NRI interface is disabled by configuration."
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979347323Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979582825Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979613826Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979632326Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979651226Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979669926Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979692226Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979708727Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979724427Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979741527Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979781927Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979866228Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979885728Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.979991929Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.980990238Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981062839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981092939Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981125239Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981224040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981265241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981282841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981300341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981321141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981358942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981374642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981388742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981414242Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981599544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981643644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981670044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981688645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981706645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981723145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981738745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981753545Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981771645Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981788046Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.981800546Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.982201849Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.982539552Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.982685454Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jan 16 03:27:26 scheduled-stop-033200 dockerd[684]: time="2024-01-16T03:27:26.982713954Z" level=info msg="containerd successfully booted in 0.052496s"
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.027239243Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.040530456Z" level=info msg="Loading containers: start."
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.257659105Z" level=info msg="Loading containers: done."
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.275077353Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.275102554Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.275111054Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.275117654Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.275171554Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.275323056Z" level=info msg="Daemon has completed initialization"
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.327451499Z" level=info msg="API listen on [::]:2376"
	Jan 16 03:27:27 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:27.327601601Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 16 03:27:27 scheduled-stop-033200 systemd[1]: Started Docker Application Container Engine.
	Jan 16 03:27:58 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:58.048026360Z" level=info msg="Processing signal 'terminated'"
	Jan 16 03:27:58 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:58.049736960Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 16 03:27:58 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:58.050218760Z" level=info msg="Daemon shutdown complete"
	Jan 16 03:27:58 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:58.051146460Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jan 16 03:27:58 scheduled-stop-033200 dockerd[678]: time="2024-01-16T03:27:58.051283960Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jan 16 03:27:58 scheduled-stop-033200 systemd[1]: Stopping Docker Application Container Engine...
	Jan 16 03:27:59 scheduled-stop-033200 systemd[1]: docker.service: Succeeded.
	Jan 16 03:27:59 scheduled-stop-033200 systemd[1]: Stopped Docker Application Container Engine.
	Jan 16 03:27:59 scheduled-stop-033200 systemd[1]: Starting Docker Application Container Engine...
	Jan 16 03:27:59 scheduled-stop-033200 dockerd[1013]: time="2024-01-16T03:27:59.126283560Z" level=info msg="Starting up"
	Jan 16 03:28:59 scheduled-stop-033200 dockerd[1013]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jan 16 03:28:59 scheduled-stop-033200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jan 16 03:28:59 scheduled-stop-033200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jan 16 03:28:59 scheduled-stop-033200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:523: *** TestScheduledStopWindows FAILED at 2024-01-16 03:28:59.2496093 +0000 UTC m=+6753.190836101
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-033200 -n scheduled-stop-033200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-033200 -n scheduled-stop-033200: exit status 6 (11.9715045s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 03:28:59.374998    5844 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0116 03:29:11.149396    5844 status.go:415] kubeconfig endpoint: extract IP: "scheduled-stop-033200" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "scheduled-stop-033200" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "scheduled-stop-033200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-033200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-033200: (1m1.7183788s)
--- FAIL: TestScheduledStopWindows (279.99s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (302.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-175600 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-175600 --driver=hyperv: exit status 1 (4m59.7588861s)

                                                
                                                
-- stdout --
	* [NoKubernetes-175600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node NoKubernetes-175600 in cluster NoKubernetes-175600
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

-- /stdout --
** stderr ** 
	W0116 03:30:13.397034    9720 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-175600 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-175600 -n NoKubernetes-175600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-175600 -n NoKubernetes-175600: exit status 7 (2.580121s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0116 03:35:13.165948   13380 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-175600" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (302.34s)

TestNetworkPlugins/group/enable-default-cni/Start (10800.542s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-700700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperv
E0116 04:35:29.612845   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-700700\client.crt: The system cannot find the path specified.
E0116 04:35:29.628600   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-700700\client.crt: The system cannot find the path specified.
E0116 04:35:29.644013   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-700700\client.crt: The system cannot find the path specified.
E0116 04:35:29.675037   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-700700\client.crt: The system cannot find the path specified.
E0116 04:35:29.723523   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-700700\client.crt: The system cannot find the path specified.
E0116 04:35:29.817531   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-700700\client.crt: The system cannot find the path specified.
E0116 04:35:29.988709   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-700700\client.crt: The system cannot find the path specified.
E0116 04:35:30.323445   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-700700\client.crt: The system cannot find the path specified.
E0116 04:35:30.973994   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-700700\client.crt: The system cannot find the path specified.
E0116 04:35:32.267225   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-700700\client.crt: The system cannot find the path specified.
E0116 04:35:34.841136   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-700700\client.crt: The system cannot find the path specified.
E0116 04:35:37.688780   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-076500\client.crt: The system cannot find the path specified.
E0116 04:35:39.967291   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-700700\client.crt: The system cannot find the path specified.
E0116 04:35:50.222161   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-700700\client.crt: The system cannot find the path specified.
E0116 04:36:10.710848   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-700700\client.crt: The system cannot find the path specified.
panic: test timed out after 3h0m0s
running tests:
	TestNetworkPlugins (48m29s)
	TestNetworkPlugins/group (48m29s)
	TestNetworkPlugins/group/calico (12m44s)
	TestNetworkPlugins/group/custom-flannel (11m48s)
	TestNetworkPlugins/group/enable-default-cni (57s)
	TestNetworkPlugins/group/enable-default-cni/Start (57s)
	TestNetworkPlugins/group/false (3m43s)
	TestNetworkPlugins/group/false/Start (3m43s)
	TestStartStop (1h1m29s)
	TestStartStop/group (1h1m29s)

goroutine 3037 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2259 +0x3b9
created by time.goFunc
	/usr/local/go/src/time/sleep.go:176 +0x2d

goroutine 1 [chan receive, 41 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc0005a5040, 0xc00087db80)
	/usr/local/go/src/testing/testing.go:1601 +0x138
testing.runTests(0xc000709360?, {0x4f16d80, 0x2a, 0x2a}, {0xc00087dbe8?, 0xd2bfe5?, 0x4f38a20?})
	/usr/local/go/src/testing/testing.go:2052 +0x445
testing.(*M).Run(0xc000709360)
	/usr/local/go/src/testing/testing.go:1925 +0x636
k8s.io/minikube/test/integration.TestMain(0xc00009bef0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x88
main.main()
	_testmain.go:131 +0x1c6

goroutine 5 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0001d1200)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 1841 [chan receive, 62 minutes]:
testing.(*T).Run(0xc00234c9c0, {0x2d6eb77?, 0x314c4fa629f4?}, 0x37c7ca0)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestStartStop(0xc00234c820?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00234c9c0, 0x37c7ac8)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 28 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1157 +0x111
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 27
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.110.1/klog.go:1153 +0x171

goroutine 1974 [chan receive, 4 minutes]:
testing.(*T).Run(0xc00261d040, {0x2d6eb7c?, 0x3c13408?}, 0xc0023458f0)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00261d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5f0
testing.tRunner(0xc00261d040, 0xc0001d0c80)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1967
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2863 [syscall, 4 minutes, locked to thread]:
syscall.SyscallN(0x7ffc77ee4de0?, {0xc00235bba8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0xa?, 0xc0026d5cc0?, 0xc0026d5bb0?, 0xc0026d5ce0?, 0x100c0026d5ca8?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc00085a458?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc002640a80)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00204eb00)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc0021a6680?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc0021a6680, 0xc00204eb00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0021a6680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0021a6680, 0xc0023458f0)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1974
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 190 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000b4f500, 0xc000180240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 161
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 189 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001fd8a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 161
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 2926 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c3c620, 0xc000180240}, 0xc0020d1f50, 0xc002d30f58?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c3c620, 0xc000180240}, 0x1?, 0x1?, 0xc0020d1fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c3c620?, 0xc000180240?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0020d1fd0?, 0xdfdfc7?, 0xc002622d80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2934
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 73 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000b4f4d0, 0x3c)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3c15ae0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001fd8960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000b4f500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002111f90?, {0x3c19e80, 0xc0021fd4d0}, 0x1, 0xc000180240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0xcb821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc002111fd0?, 0xdfdfc7?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 190
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 1972 [chan receive, 48 minutes]:
testing.(*testContext).waitParallel(0xc000967130)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc00234da00)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00234da00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00234da00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc00234da00, 0xc0001d0b00)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1967
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2194 [chan receive, 35 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0025c0580, 0xc000180240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2183
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 1179 [chan send, 151 minutes]:
os/exec.(*Cmd).watchCtx(0xc002a7e2c0, 0xc002a2c540)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 1178
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

goroutine 74 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c3c620, 0xc000180240}, 0xc000843f50, 0xc000b51138?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c3c620, 0xc000180240}, 0x1?, 0x1?, 0xc000843fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c3c620?, 0xc000180240?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000843fd0?, 0xdfdfc7?, 0xc000866580?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 190
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 75 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 74
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 687 [IO wait, 163 minutes]:
internal/poll.runtime_pollWait(0x1c361c7a018, 0x72)
	/usr/local/go/src/runtime/netpoll.go:343 +0x85
internal/poll.(*pollDesc).wait(0x0?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0004ff918, 0xc001ffbbb8)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc0004ff900, 0x274, {0xc0007a10e0?, 0x0?, 0x37c8540?}, 0xc001ffbcc8?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc0004ff900, 0xc001ffbd90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc0004ff900)
	/usr/local/go/src/net/fd_windows.go:166 +0x54
net.(*TCPListener).accept(0xc0000a2920)
	/usr/local/go/src/net/tcpsock_posix.go:152 +0x1e
net.(*TCPListener).Accept(0xc0000a2920)
	/usr/local/go/src/net/tcpsock.go:315 +0x30
net/http.(*Server).Serve(0xc0004c81e0, {0x3c300a0, 0xc0000a2920})
	/usr/local/go/src/net/http/server.go:3056 +0x364
net/http.(*Server).ListenAndServe(0xc0004c81e0)
	/usr/local/go/src/net/http/server.go:2985 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00261cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 684
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2211 +0x13a

goroutine 2934 [chan receive, 4 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002049140, 0xc000180240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2921
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 3043 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc002813c28?, 0x0?, 0x3eb4090?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc002813c80?, 0xc8e656?, 0x4f93c40?, 0xc002813ce8?, 0xc813bd?, 0x1c361990a28?, 0x87?, 0x11?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc00083d094?, 0x2f6c, 0xd27fbf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc0023fe780?, {0xc00083d094?, 0xf830e5?, 0xc000838000?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc0023fe780, {0xc00083d094, 0x2f6c, 0x2f6c})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009e2a0, {0xc00083d094?, 0xf830c0?, 0xc002813e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00246cc00, {0x3c18b20, 0xc00009e2a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c18ba0, 0xc00246cc00}, {0x3c18b20, 0xc00009e2a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc002813fb8?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3009
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 2141 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c3c620, 0xc000180240}, 0xc002815f50, 0xc001fd86b8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c3c620, 0xc000180240}, 0x1?, 0x1?, 0xc002815fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c3c620?, 0xc000180240?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002815fd0?, 0xdfdfc7?, 0xc00206a4e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2096
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 1969 [chan receive, 48 minutes]:
testing.(*testContext).waitParallel(0xc000967130)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc00234cb60)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00234cb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00234cb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc00234cb60, 0xc0001d0700)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1967
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1228 [chan send, 147 minutes]:
os/exec.(*Cmd).watchCtx(0xc002a7f4a0, 0xc002a2d560)
	/usr/local/go/src/os/exec/exec.go:782 +0x3ef
created by os/exec.(*Cmd).Start in goroutine 866
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

goroutine 3036 [syscall, locked to thread]:
syscall.SyscallN(0x1c361c8d960?, {0xc001fe7c28?, 0x4f53b00?, 0x3eb4090?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc001fe7c80?, 0xc8e656?, 0xc0028931e0?, 0xc001fe7ce8?, 0xc81265?, 0xcb85dc?, 0xc0028931e0?, 0xc001fe7ce0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc000c3993a?, 0x2c6, 0x400?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc0005f9680?, {0xc000c3993a?, 0x0?, 0xc000c39800?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc0005f9680, {0xc000c3993a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00085a1d0, {0xc000c3993a?, 0xc001fe7e68?, 0xc001fe7e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002686150, {0x3c18b20, 0xc00085a1d0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c18ba0, 0xc002686150}, {0x3c18b20, 0xc00085a1d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc000954a80?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1975
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 1975 [syscall, locked to thread]:
syscall.SyscallN(0x7ffc77ee4de0?, {0xc0023570e8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x5?, 0x0?, 0x3c0d790?, 0x0?, 0x100c0023571e8?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc00085a1a0?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc0026cd560)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002478160)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xf4a1?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
os/exec.(*Cmd).CombinedOutput(0xc002478160)
	/usr/local/go/src/os/exec/exec.go:1005 +0x94
k8s.io/minikube/test/integration.debugLogs(0xc00261d1e0, {0xc00248e4c8, 0x15})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:566 +0x9005
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00261d1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:211 +0xc2c
testing.tRunner(0xc00261d1e0, 0xc0001d0d00)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1967
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1971 [chan receive, 2 minutes]:
testing.(*T).Run(0xc00234d860, {0x2d6eb7c?, 0x3c13408?}, 0xc00246cb10)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00234d860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5f0
testing.tRunner(0xc00234d860, 0xc0001d0a80)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1967
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 902 [chan receive, 153 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002048ac0, 0xc000180240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 853
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 1909 [chan receive, 11 minutes]:
testing.(*testContext).waitParallel(0xc000967130)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1571 +0x53c
testing.tRunner(0xc002901040, 0x37c7ca0)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1841
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2142 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2141
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 2189 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2188
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 1967 [chan receive, 4 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1561 +0x489
testing.tRunner(0xc00234c680, 0xc002684270)
	/usr/local/go/src/testing/testing.go:1601 +0x138
created by testing.(*T).Run in goroutine 1787
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2140 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0020886d0, 0x18)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3c15ae0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002d30d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002088700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0022adf90?, {0x3c19e80, 0xc0024aa000}, 0x1, 0xc000180240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00206a120?, 0x3b9aca00, 0x0, 0xd0?, 0xcb821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xdfdf65?, 0xc00204e000?, 0xc002a2c180?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2096
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2188 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c3c620, 0xc000180240}, 0xc0022a9f50, 0xc001fd9258?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c3c620, 0xc000180240}, 0x1?, 0x1?, 0xc0022a9fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c3c620?, 0xc000180240?}, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0022a9fd0?, 0xdfdfc7?, 0xc0021e2240?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2194
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 2095 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002d30ea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2136
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 2161 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002630d20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2183
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 901 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002073ce0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 853
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 2607 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c3c620, 0xc000180240}, 0xc002063f50, 0xc002d30bf8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c3c620, 0xc000180240}, 0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c3c620?, 0xc000180240?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdfdf65?, 0xc0022862c0?, 0xc00206af60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2658
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 896 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc002048a90, 0x36)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3c15ae0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002073bc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002048ac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x6e696d207c202020?, {0x3c19e80, 0xc000b7a6f0}, 0x1, 0xc000180240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x2020747261747320?, 0x3b9aca00, 0x0, 0x32?, 0x726f6d656d2d2d20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0x202020202065736c?, 0x2020202020202020?, 0x2020202020202020?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 902
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 897 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c3c620, 0xc000180240}, 0xc002199f50, 0x302e32332e317620?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c3c620, 0xc000180240}, 0x20?, 0x756620702d207c20?, 0x6c616e6f6974636e?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c3c620?, 0xc000180240?}, 0x2020202020202020?, 0x2020202020202020?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x6562756b696e696d?, 0x6e696b6e656a5c33?, 0x332e3176207c2073?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 902
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 914 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 897
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 2739 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3c3c620, 0xc000180240}, 0xc002099f50, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/wait.go:205 +0xd7
k8s.io/apimachinery/pkg/util/wait.poll({0x3c3c620, 0xc000180240}, 0x50?, 0x11f1fe5?, 0xc002099ec0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3c3c620?, 0xc000180240?}, 0xc00050abe0?, 0xc00206b5c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002099fd0?, 0x11ead45?, 0xc000954780?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2704
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:142 +0x29a

goroutine 2187 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0025c0550, 0x16)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3c15ae0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002630c00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0025c0580)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4edc400?, {0x3c19e80, 0xc0027cc690}, 0x1, 0xc000180240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00287a7e0?, 0x3b9aca00, 0x0, 0xd0?, 0xcb821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xdfdf65?, 0xc00248a000?, 0xc00085e300?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2194
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2608 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2607
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 2925 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc002049110, 0x0)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3c15ae0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002d309c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002049140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002533f88?, {0x3c19e80, 0xc002546030}, 0x1, 0xc000180240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0xcb821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc002533fd0?, 0xdfdfc7?, 0xc0026230e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2934
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 1787 [chan receive, 48 minutes]:
testing.(*T).Run(0xc00234c000, {0x2d6eb77?, 0xce806d?}, 0xc002684270)
	/usr/local/go/src/testing/testing.go:1649 +0x3c8
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00234c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00234c000, 0x37c7a80)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 1976 [syscall, locked to thread]:
syscall.SyscallN(0x7ffc77ee4de0?, {0xc0021f70e8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x5?, 0xc0021f7180?, 0x3c0d790?, 0xc0021f7168?, 0x100c0021f71e8?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc000742080?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc001fd6d80)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00204e000)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0x4186?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
os/exec.(*Cmd).CombinedOutput(0xc00204e000)
	/usr/local/go/src/os/exec/exec.go:1005 +0x94
k8s.io/minikube/test/integration.debugLogs(0xc00261d380, {0xc000885350, 0xd})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:650 +0xb9dc
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00261d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:211 +0xc2c
testing.tRunner(0xc00261d380, 0xc0001d0d80)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1967
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2096 [chan receive, 37 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002088700, 0xc000180240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2136
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 1970 [chan receive, 48 minutes]:
testing.(*testContext).waitParallel(0xc000967130)
	/usr/local/go/src/testing/testing.go:1715 +0xac
testing.(*T).Parallel(0xc00234d6c0)
	/usr/local/go/src/testing/testing.go:1404 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc00234d6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00234d6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x33c
testing.tRunner(0xc00234d6c0, 0xc0001d0900)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1967
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2865 [syscall, locked to thread]:
syscall.SyscallN(0x1c367402a68?, {0xc0020cbc28?, 0x4f53b00?, 0x3eb4090?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0020cbc80?, 0xc8e656?, 0x4f93c40?, 0xc0020cbce8?, 0xc813bd?, 0x1c361990598?, 0x20000?, 0x20?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc0022c50ff?, 0x8f01, 0xd27fbf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc00095db80?, {0xc0022c50ff?, 0xc0020cbf80?, 0xc0022ae000?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc00095db80, {0xc0022c50ff, 0x8f01, 0x8f01})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00085a478, {0xc0022c50ff?, 0x4182?, 0x4182?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0023459e0, {0x3c18b20, 0xc00085a478})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c18ba0, 0xc0023459e0}, {0x3c18b20, 0xc00085a478}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc002a2c1e0?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2863
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 2933 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002d30ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2921
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 3044 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc002478f20, 0xc002902300)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3009
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

goroutine 2703 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0020739e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2702
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 2606 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000b4f690, 0xf)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3c15ae0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002631080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000b4f6c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0020b7f88?, {0x3c19e80, 0xc002926780}, 0x1, 0xc000180240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0xd0?, 0xcb821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc0020b7fd0?, 0xdfdfc7?, 0x7379535c53574f44?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2658
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 3024 [syscall, locked to thread]:
syscall.SyscallN(0x4f6a480?, {0xc0020b7c28?, 0x0?, 0x3eb4090?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0020b7c80?, 0xc8e656?, 0xc0001cfba0?, 0xc0020b7ce8?, 0xc81265?, 0xcb85dc?, 0xc0001cfba0?, 0xc0020b7ce0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc0022f093a?, 0x2c6, 0x400?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc0004a5180?, {0xc0022f093a?, 0x0?, 0xc0022f0800?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc0004a5180, {0xc0022f093a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000742088, {0xc0022f093a?, 0xc0020b7e68?, 0xc0020b7e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002448090, {0x3c18b20, 0xc000742088})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c18ba0, 0xc002448090}, {0x3c18b20, 0xc000742088}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc0020b7fb8?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1976
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 2930 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc00204eb00, 0xc002a2baa0)
	/usr/local/go/src/os/exec/exec.go:757 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2863
	/usr/local/go/src/os/exec/exec.go:743 +0xa34

goroutine 3042 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x2065687420646e69?, {0xc002481c28?, 0xa2e646569666963?, 0x697461726765746e?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc002481c80?, 0xc8e656?, 0xc0021a6820?, 0xc002481ce8?, 0xc81265?, 0xcb85dc?, 0xc0021a6820?, 0xc002481ce0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc0025b4a43?, 0x5bd, 0xd27fbf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc0023fe280?, {0xc0025b4a43?, 0x0?, 0xc0025b4800?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc0023fe280, {0xc0025b4a43, 0x5bd, 0x5bd})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009e288, {0xc0025b4a43?, 0xc002481e68?, 0xc002481e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00246cbd0, {0x3c18b20, 0xc00009e288})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c18ba0, 0xc00246cbd0}, {0x3c18b20, 0xc00009e288}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc0001d0d80?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3009
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 2658 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000b4f6c0, 0xc000180240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2644
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594

goroutine 2641 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0026311a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:276 +0x305
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2644
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/delaying_queue.go:113 +0x21f

goroutine 3009 [syscall, 2 minutes, locked to thread]:
syscall.SyscallN(0x7ffc77ee4de0?, {0xc0021e9ba8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0xa?, 0xc0021e9cc0?, 0xc0021e9bb0?, 0xc0021e9ce0?, 0x100c0021e9ca8?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0xc00009e280?, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1145 +0x5d
os.(*Process).wait(0xc00273e8a0)
	/usr/local/go/src/os/exec_windows.go:18 +0x55
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002478f20)
	/usr/local/go/src/os/exec/exec.go:890 +0x45
os/exec.(*Cmd).Run(0xc000505a00?)
	/usr/local/go/src/os/exec/exec.go:590 +0x2d
k8s.io/minikube/test/integration.Run(0xc000505a00, 0xc002478f20)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1ed
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc000505a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc000505a00, 0xc00246cb10)
	/usr/local/go/src/testing/testing.go:1595 +0xff
created by testing.(*T).Run in goroutine 1971
	/usr/local/go/src/testing/testing.go:1648 +0x3ad

goroutine 2927 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2926
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 2738 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000b4f0d0, 0x1)
	/usr/local/go/src/runtime/sema.go:527 +0x15d
sync.(*Cond).Wait(0x3c15ae0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0020738c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/util/workqueue/queue.go:200 +0x99
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000b4f100)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0020b3f20?, {0x3c19e80, 0xc0029270b0}, 0x1, 0xc000180240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00206a120?, 0x3b9aca00, 0x0, 0xd0?, 0xcb821c?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(0xc0020b3fd0?, 0x11fca45?, 0xc000954780?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/backoff.go:161 +0x1e
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2704
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2864 [syscall, locked to thread]:
syscall.SyscallN(0x4f67900?, {0xc0020b5c28?, 0xcb8e51?, 0x3ecf4a0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc0020b5c80?, 0xc8e656?, 0x4f93c40?, 0xc0020b5ce8?, 0xc813bd?, 0x1c361990108?, 0x2cc524d?, 0xc785e5?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x0?, {0xc0023162b9?, 0x547, 0xd27fbf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1024 +0x8e
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:438
syscall.Read(0xc00095ca00?, {0xc0023162b9?, 0xcc09d0?, 0xc002316000?})
	/usr/local/go/src/syscall/syscall_windows.go:417 +0x2d
internal/poll.(*FD).Read(0xc00095ca00, {0xc0023162b9, 0x547, 0x547})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00085a460, {0xc0023162b9?, 0xc00206bb00?, 0xc0020b5e68?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0023459b0, {0x3c18b20, 0xc00085a460})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c18ba0, 0xc0023459b0}, {0x3c18b20, 0xc00085a460}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:416 +0x147
io.Copy(...)
	/usr/local/go/src/io/io.go:389
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:560 +0x34
os/exec.(*Cmd).Start.func2(0xc000954a80?)
	/usr/local/go/src/os/exec/exec.go:717 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2863
	/usr/local/go/src/os/exec/exec.go:716 +0xa75

goroutine 2740 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:297 +0x1c5
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2739
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.29.0/pkg/util/wait/poll.go:280 +0xc5

goroutine 2704 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000b4f100, 0xc000180240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2702
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.29.0/transport/cache.go:122 +0x594


Test pass (159/212)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 17.71
4 TestDownloadOnly/v1.16.0/preload-exists 0.08
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.28
9 TestDownloadOnly/v1.16.0/DeleteAll 1.25
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 1.23
12 TestDownloadOnly/v1.28.4/json-events 13.88
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.26
18 TestDownloadOnly/v1.28.4/DeleteAll 1.31
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 1.28
21 TestDownloadOnly/v1.29.0-rc.2/json-events 13.19
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.26
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 1.33
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 1.28
30 TestBinaryMirror 7.16
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.3
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.28
36 TestAddons/Setup 375.85
39 TestAddons/parallel/Ingress 67.75
40 TestAddons/parallel/InspektorGadget 26.41
41 TestAddons/parallel/MetricsServer 22.09
42 TestAddons/parallel/HelmTiller 28.47
44 TestAddons/parallel/CSI 109.02
45 TestAddons/parallel/Headlamp 37.23
46 TestAddons/parallel/CloudSpanner 22.07
47 TestAddons/parallel/LocalPath 31.04
48 TestAddons/parallel/NvidiaDevicePlugin 20.69
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.36
53 TestAddons/StoppedEnableDisable 46.83
54 TestCertOptions 337.55
56 TestDockerFlags 571.51
57 TestForceSystemdFlag 381.21
65 TestErrorSpam/start 17.28
66 TestErrorSpam/status 36.62
67 TestErrorSpam/pause 22.61
68 TestErrorSpam/unpause 22.75
69 TestErrorSpam/stop 50.72
72 TestFunctional/serial/CopySyncFile 0.03
73 TestFunctional/serial/StartWithProxy 200.27
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 108.18
76 TestFunctional/serial/KubeContext 0.15
77 TestFunctional/serial/KubectlGetPods 0.25
80 TestFunctional/serial/CacheCmd/cache/add_remote 27.33
81 TestFunctional/serial/CacheCmd/cache/add_local 10.83
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.28
83 TestFunctional/serial/CacheCmd/cache/list 0.26
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.41
85 TestFunctional/serial/CacheCmd/cache/cache_reload 36.9
86 TestFunctional/serial/CacheCmd/cache/delete 0.56
87 TestFunctional/serial/MinikubeKubectlCmd 0.5
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 3.28
89 TestFunctional/serial/ExtraConfig 120.41
90 TestFunctional/serial/ComponentHealth 0.19
91 TestFunctional/serial/LogsCmd 8.48
92 TestFunctional/serial/LogsFileCmd 10.6
93 TestFunctional/serial/InvalidService 20.6
99 TestFunctional/parallel/StatusCmd 42.78
103 TestFunctional/parallel/ServiceCmdConnect 28.38
104 TestFunctional/parallel/AddonsCmd 0.82
105 TestFunctional/parallel/PersistentVolumeClaim 39.88
107 TestFunctional/parallel/SSHCmd 20.94
108 TestFunctional/parallel/CpCmd 61.94
109 TestFunctional/parallel/MySQL 58.39
110 TestFunctional/parallel/FileSync 11.26
111 TestFunctional/parallel/CertSync 66.15
115 TestFunctional/parallel/NodeLabels 0.22
117 TestFunctional/parallel/NonActiveRuntimeDisabled 11.33
119 TestFunctional/parallel/License 3.5
120 TestFunctional/parallel/ServiceCmd/DeployApp 18.46
121 TestFunctional/parallel/Version/short 0.41
122 TestFunctional/parallel/Version/components 9.09
123 TestFunctional/parallel/ImageCommands/ImageListShort 7.53
124 TestFunctional/parallel/ImageCommands/ImageListTable 7.66
125 TestFunctional/parallel/ImageCommands/ImageListJson 7.54
126 TestFunctional/parallel/ImageCommands/ImageListYaml 7.69
127 TestFunctional/parallel/ImageCommands/ImageBuild 27.29
128 TestFunctional/parallel/ImageCommands/Setup 4.21
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 24.49
130 TestFunctional/parallel/ServiceCmd/List 14.29
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 21.29
132 TestFunctional/parallel/ServiceCmd/JSONOutput 14.13
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 27.37
135 TestFunctional/parallel/ProfileCmd/profile_not_create 9.43
137 TestFunctional/parallel/ProfileCmd/profile_list 8.71
138 TestFunctional/parallel/ProfileCmd/profile_json_output 8.78
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 9.04
141 TestFunctional/parallel/ImageCommands/ImageRemove 17.18
143 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.72
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 20.54
145 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.7
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
154 TestFunctional/parallel/DockerEnv/powershell 46.58
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 10.43
156 TestFunctional/parallel/UpdateContextCmd/no_changes 2.76
157 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.65
158 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.59
159 TestFunctional/delete_addon-resizer_images 0.47
160 TestFunctional/delete_my-image_image 0.2
161 TestFunctional/delete_minikube_cached_images 0.2
165 TestImageBuild/serial/Setup 187.52
166 TestImageBuild/serial/NormalBuild 9.08
167 TestImageBuild/serial/BuildWithBuildArg 8.61
168 TestImageBuild/serial/BuildWithDockerIgnore 7.63
169 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.5
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 68.14
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 1.57
207 TestMainNoArgs 0.25
208 TestMinikubeProfile 489.38
211 TestMountStart/serial/StartWithMountFirst 148.22
212 TestMountStart/serial/VerifyMountFirst 9.61
213 TestMountStart/serial/StartWithMountSecond 148.62
214 TestMountStart/serial/VerifyMountSecond 9.52
215 TestMountStart/serial/DeleteFirst 26.05
216 TestMountStart/serial/VerifyMountPostDelete 9.4
217 TestMountStart/serial/Stop 21.83
218 TestMountStart/serial/RestartStopped 111.04
219 TestMountStart/serial/VerifyMountPostStop 9.49
222 TestMultiNode/serial/FreshStart2Nodes 404.18
223 TestMultiNode/serial/DeployApp2Nodes 9.42
225 TestMultiNode/serial/AddNode 213.16
226 TestMultiNode/serial/MultiNodeLabels 0.2
227 TestMultiNode/serial/ProfileList 7.67
228 TestMultiNode/serial/CopyFile 359.69
229 TestMultiNode/serial/StopNode 66.29
230 TestMultiNode/serial/StartAfterStop 165.46
235 TestPreload 463.77
241 TestRunningBinaryUpgrade 1063.97
243 TestKubernetesUpgrade 1152.07
246 TestNoKubernetes/serial/StartNoK8sWithVersion 0.32
256 TestPause/serial/Start 384.74
257 TestPause/serial/SecondStartNoReconfiguration 338.18
258 TestPause/serial/Pause 8.58
259 TestPause/serial/VerifyStatus 13.14
260 TestPause/serial/Unpause 8.63
261 TestPause/serial/PauseAgain 8.7
262 TestPause/serial/DeletePaused 54.65
274 TestPause/serial/VerifyDeletedResources 10.17
275 TestStoppedBinaryUpgrade/Setup 1.07
276 TestStoppedBinaryUpgrade/Upgrade 876.5
291 TestStoppedBinaryUpgrade/MinikubeLogs 9.13
TestDownloadOnly/v1.16.0/json-events (17.71s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-131100 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-131100 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv: (17.7138425s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (17.71s)

TestDownloadOnly/v1.16.0/preload-exists (0.08s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.08s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-131100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-131100: exit status 85 (275.9492ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-131100 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:36 UTC |          |
	|         | -p download-only-131100        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 01:36:26
	Running on machine: minikube3
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 01:36:26.304037    9012 out.go:296] Setting OutFile to fd 624 ...
	I0116 01:36:26.305146    9012 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 01:36:26.305146    9012 out.go:309] Setting ErrFile to fd 628...
	I0116 01:36:26.305146    9012 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0116 01:36:26.318474    9012 root.go:314] Error reading config file at C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0116 01:36:26.330969    9012 out.go:303] Setting JSON to true
	I0116 01:36:26.334146    9012 start.go:128] hostinfo: {"hostname":"minikube3","uptime":47377,"bootTime":1705321609,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0116 01:36:26.334146    9012 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0116 01:36:26.336169    9012 out.go:97] [download-only-131100] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0116 01:36:26.337461    9012 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 01:36:26.336688    9012 notify.go:220] Checking for updates...
	I0116 01:36:26.338206    9012 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	W0116 01:36:26.336688    9012 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0116 01:36:26.338943    9012 out.go:169] MINIKUBE_LOCATION=17967
	I0116 01:36:26.339988    9012 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0116 01:36:26.340790    9012 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 01:36:26.341811    9012 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 01:36:31.968323    9012 out.go:97] Using the hyperv driver based on user configuration
	I0116 01:36:31.968404    9012 start.go:298] selected driver: hyperv
	I0116 01:36:31.968404    9012 start.go:902] validating driver "hyperv" against <nil>
	I0116 01:36:31.968471    9012 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 01:36:32.019363    9012 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0116 01:36:32.020389    9012 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 01:36:32.020389    9012 cni.go:84] Creating CNI manager for ""
	I0116 01:36:32.020948    9012 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0116 01:36:32.021041    9012 start_flags.go:321] config:
	{Name:download-only-131100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-131100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 01:36:32.022657    9012 iso.go:125] acquiring lock: {Name:mk2c0b62d272a573835231fdc54419c800e07e34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 01:36:32.023918    9012 out.go:97] Downloading VM boot image ...
	I0116 01:36:32.024120    9012 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.32.1-1703784139-17866-amd64.iso
	I0116 01:36:36.010795    9012 out.go:97] Starting control plane node download-only-131100 in cluster download-only-131100
	I0116 01:36:36.010795    9012 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0116 01:36:36.104417    9012 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0116 01:36:36.105448    9012 cache.go:56] Caching tarball of preloaded images
	I0116 01:36:36.105907    9012 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0116 01:36:36.106812    9012 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0116 01:36:36.106812    9012 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0116 01:36:36.168866    9012 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0116 01:36:40.199481    9012 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0116 01:36:40.200657    9012 preload.go:256] verifying checksum of C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0116 01:36:41.210146    9012 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0116 01:36:41.210146    9012 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-131100\config.json ...
	I0116 01:36:41.210146    9012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-131100\config.json: {Name:mk2cef9efde864b37c9b3ef7789511e09c8b9ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:36:41.212146    9012 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0116 01:36:41.213137    9012 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\windows\amd64\v1.16.0/kubectl.exe
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-131100"

-- /stdout --
** stderr ** 
	W0116 01:36:44.021122    3976 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

TestDownloadOnly/v1.16.0/DeleteAll (1.25s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2497576s)
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (1.25s)

TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (1.23s)

=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-131100
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-131100: (1.2316814s)
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (1.23s)

TestDownloadOnly/v1.28.4/json-events (13.88s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-399400 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-399400 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=hyperv: (13.8828516s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (13.88s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.26s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-399400
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-399400: exit status 85 (260.8173ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-131100 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:36 UTC |                     |
	|         | -p download-only-131100        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:36 UTC | 16 Jan 24 01:36 UTC |
	| delete  | -p download-only-131100        | download-only-131100 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:36 UTC | 16 Jan 24 01:36 UTC |
	| start   | -o=json --download-only        | download-only-399400 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:36 UTC |                     |
	|         | -p download-only-399400        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 01:36:46
	Running on machine: minikube3
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 01:36:46.863183   10092 out.go:296] Setting OutFile to fd 508 ...
	I0116 01:36:46.863905   10092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 01:36:46.863905   10092 out.go:309] Setting ErrFile to fd 712...
	I0116 01:36:46.863905   10092 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 01:36:46.886101   10092 out.go:303] Setting JSON to true
	I0116 01:36:46.889431   10092 start.go:128] hostinfo: {"hostname":"minikube3","uptime":47397,"bootTime":1705321609,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0116 01:36:46.889431   10092 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0116 01:36:46.890746   10092 out.go:97] [download-only-399400] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0116 01:36:46.890746   10092 notify.go:220] Checking for updates...
	I0116 01:36:46.892071   10092 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 01:36:46.892839   10092 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0116 01:36:46.893058   10092 out.go:169] MINIKUBE_LOCATION=17967
	I0116 01:36:46.893770   10092 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0116 01:36:46.895192   10092 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 01:36:46.896346   10092 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 01:36:52.260923   10092 out.go:97] Using the hyperv driver based on user configuration
	I0116 01:36:52.261449   10092 start.go:298] selected driver: hyperv
	I0116 01:36:52.261449   10092 start.go:902] validating driver "hyperv" against <nil>
	I0116 01:36:52.261449   10092 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 01:36:52.314608   10092 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0116 01:36:52.315960   10092 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 01:36:52.315960   10092 cni.go:84] Creating CNI manager for ""
	I0116 01:36:52.315960   10092 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0116 01:36:52.315960   10092 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 01:36:52.315960   10092 start_flags.go:321] config:
	{Name:download-only-399400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-399400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 01:36:52.316665   10092 iso.go:125] acquiring lock: {Name:mk2c0b62d272a573835231fdc54419c800e07e34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 01:36:52.318703   10092 out.go:97] Starting control plane node download-only-399400 in cluster download-only-399400
	I0116 01:36:52.318703   10092 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0116 01:36:52.367135   10092 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0116 01:36:52.367135   10092 cache.go:56] Caching tarball of preloaded images
	I0116 01:36:52.367652   10092 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0116 01:36:52.368463   10092 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0116 01:36:52.368753   10092 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0116 01:36:52.431442   10092 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4?checksum=md5:7ebdea7754e21f51b865dbfc36b53b7d -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I0116 01:36:56.825145   10092 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	I0116 01:36:56.826808   10092 preload.go:256] verifying checksum of C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-399400"

-- /stdout --
** stderr ** 
	W0116 01:37:00.672455   10820 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.26s)

TestDownloadOnly/v1.28.4/DeleteAll (1.31s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3066683s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (1.31s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.28s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-399400
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-399400: (1.2829763s)
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (1.28s)

TestDownloadOnly/v1.29.0-rc.2/json-events (13.19s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-690000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-690000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=hyperv: (13.185446s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (13.19s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.26s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-690000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-690000: exit status 85 (260.7428ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-131100 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:36 UTC |                     |
	|         | -p download-only-131100           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:36 UTC | 16 Jan 24 01:36 UTC |
	| delete  | -p download-only-131100           | download-only-131100 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:36 UTC | 16 Jan 24 01:36 UTC |
	| start   | -o=json --download-only           | download-only-399400 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:36 UTC |                     |
	|         | -p download-only-399400           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	| delete  | --all                             | minikube             | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:37 UTC | 16 Jan 24 01:37 UTC |
	| delete  | -p download-only-399400           | download-only-399400 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:37 UTC | 16 Jan 24 01:37 UTC |
	| start   | -o=json --download-only           | download-only-690000 | minikube3\jenkins | v1.32.0 | 16 Jan 24 01:37 UTC |                     |
	|         | -p download-only-690000           |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr         |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |                   |         |                     |                     |
	|         | --container-runtime=docker        |                      |                   |         |                     |                     |
	|         | --driver=hyperv                   |                      |                   |         |                     |                     |
	|---------|-----------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/16 01:37:03
	Running on machine: minikube3
	Binary: Built with gc go1.21.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0116 01:37:03.588423    8968 out.go:296] Setting OutFile to fd 756 ...
	I0116 01:37:03.589436    8968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 01:37:03.589436    8968 out.go:309] Setting ErrFile to fd 584...
	I0116 01:37:03.589436    8968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 01:37:03.610427    8968 out.go:303] Setting JSON to true
	I0116 01:37:03.613634    8968 start.go:128] hostinfo: {"hostname":"minikube3","uptime":47414,"bootTime":1705321609,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0116 01:37:03.613634    8968 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0116 01:37:03.615268    8968 out.go:97] [download-only-690000] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0116 01:37:03.615268    8968 notify.go:220] Checking for updates...
	I0116 01:37:03.616122    8968 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 01:37:03.616940    8968 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0116 01:37:03.617536    8968 out.go:169] MINIKUBE_LOCATION=17967
	I0116 01:37:03.618084    8968 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0116 01:37:03.619626    8968 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0116 01:37:03.620212    8968 driver.go:392] Setting default libvirt URI to qemu:///system
	I0116 01:37:09.138018    8968 out.go:97] Using the hyperv driver based on user configuration
	I0116 01:37:09.138212    8968 start.go:298] selected driver: hyperv
	I0116 01:37:09.138212    8968 start.go:902] validating driver "hyperv" against <nil>
	I0116 01:37:09.138487    8968 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0116 01:37:09.193496    8968 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0116 01:37:09.193753    8968 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0116 01:37:09.194768    8968 cni.go:84] Creating CNI manager for ""
	I0116 01:37:09.194768    8968 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0116 01:37:09.194768    8968 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0116 01:37:09.194768    8968 start_flags.go:321] config:
	{Name:download-only-690000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-690000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0116 01:37:09.194768    8968 iso.go:125] acquiring lock: {Name:mk2c0b62d272a573835231fdc54419c800e07e34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0116 01:37:09.195754    8968 out.go:97] Starting control plane node download-only-690000 in cluster download-only-690000
	I0116 01:37:09.196639    8968 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0116 01:37:09.263297    8968 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0116 01:37:09.263297    8968 cache.go:56] Caching tarball of preloaded images
	I0116 01:37:09.263618    8968 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0116 01:37:09.264727    8968 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0116 01:37:09.264727    8968 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0116 01:37:09.332816    8968 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4?checksum=md5:47acda482c3add5b56147c92b8d7f468 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4
	I0116 01:37:12.407758    8968 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0116 01:37:12.408740    8968 preload.go:256] verifying checksum of C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-amd64.tar.lz4 ...
	I0116 01:37:13.412395    8968 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0116 01:37:13.412395    8968 profile.go:148] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-690000\config.json ...
	I0116 01:37:13.413398    8968 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-690000\config.json: {Name:mk0e088e1adf334466ec2042880a1a6785d84b3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0116 01:37:13.414407    8968 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0116 01:37:13.415398    8968 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\windows\amd64\v1.29.0-rc.2/kubectl.exe
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-690000"

-- /stdout --
** stderr ** 
	W0116 01:37:16.709279    9928 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.26s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.33s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3264287s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (1.33s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.28s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-690000
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-690000: (1.2772915s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (1.28s)

TestBinaryMirror (7.16s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-226800 --alsologtostderr --binary-mirror http://127.0.0.1:52661 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-226800 --alsologtostderr --binary-mirror http://127.0.0.1:52661 --driver=hyperv: (6.2507787s)
helpers_test.go:175: Cleaning up "binary-mirror-226800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-226800
--- PASS: TestBinaryMirror (7.16s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.3s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-179200
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-179200: exit status 85 (299.3803ms)

-- stdout --
	* Profile "addons-179200" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-179200"

-- /stdout --
** stderr ** 
	W0116 01:37:30.536473    1716 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.30s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.28s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-179200
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-179200: exit status 85 (282.7416ms)

-- stdout --
	* Profile "addons-179200" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-179200"

-- /stdout --
** stderr ** 
	W0116 01:37:30.536473   12808 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.28s)

TestAddons/Setup (375.85s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-179200 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-179200 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m15.8500497s)
--- PASS: TestAddons/Setup (375.85s)

TestAddons/parallel/Ingress (67.75s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-179200 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-179200 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-179200 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [765b91df-e5d1-4151-b3d7-5a58bf5178aa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [765b91df-e5d1-4151-b3d7-5a58bf5178aa] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.020613s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-179200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-179200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.725757s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-179200 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0116 01:44:59.391718    8772 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-179200 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-179200 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-179200 ip: (2.9399697s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.27.117.123
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-179200 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-179200 addons disable ingress-dns --alsologtostderr -v=1: (18.0315022s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-179200 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-179200 addons disable ingress --alsologtostderr -v=1: (21.7152793s)
--- PASS: TestAddons/parallel/Ingress (67.75s)

TestAddons/parallel/InspektorGadget (26.41s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-c96rx" [2daed79d-465f-4e36-a1dd-2064c909ba34] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0126585s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-179200
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-179200: (21.3970578s)
--- PASS: TestAddons/parallel/InspektorGadget (26.41s)

TestAddons/parallel/MetricsServer (22.09s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 21.4752ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-xfkvg" [928ff4ee-69ae-4891-864a-ec481c75490f] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0228006s
addons_test.go:415: (dbg) Run:  kubectl --context addons-179200 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-179200 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-179200 addons disable metrics-server --alsologtostderr -v=1: (16.8492433s)
--- PASS: TestAddons/parallel/MetricsServer (22.09s)

TestAddons/parallel/HelmTiller (28.47s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 5.8192ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-nbhrz" [4a1ef8a5-41c6-4e54-ad20-d0754f040ae0] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0114311s
addons_test.go:473: (dbg) Run:  kubectl --context addons-179200 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-179200 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.1165978s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-179200 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-179200 addons disable helm-tiller --alsologtostderr -v=1: (16.3187759s)
--- PASS: TestAddons/parallel/HelmTiller (28.47s)

TestAddons/parallel/CSI (109.02s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 24.5728ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-179200 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-179200 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [cbfa51db-b463-4111-aead-b04684bb72b3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [cbfa51db-b463-4111-aead-b04684bb72b3] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 22.0179666s
addons_test.go:584: (dbg) Run:  kubectl --context addons-179200 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-179200 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-179200 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-179200 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-179200 delete pod task-pv-pod: (1.2684909s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-179200 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-179200 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-179200 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [87d03faf-84a9-4ed1-841b-424d8e533e6c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [87d03faf-84a9-4ed1-841b-424d8e533e6c] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0113922s
addons_test.go:626: (dbg) Run:  kubectl --context addons-179200 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-179200 delete pod task-pv-pod-restore: (1.4551202s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-179200 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-179200 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-179200 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-179200 addons disable csi-hostpath-driver --alsologtostderr -v=1: (23.194581s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-179200 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-179200 addons disable volumesnapshots --alsologtostderr -v=1: (15.593469s)
--- PASS: TestAddons/parallel/CSI (109.02s)

TestAddons/parallel/Headlamp (37.23s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-179200 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-179200 --alsologtostderr -v=1: (18.2124255s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-njlbv" [719b8d99-559a-4d65-9bf0-9a2fc37718b7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-njlbv" [719b8d99-559a-4d65-9bf0-9a2fc37718b7] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-njlbv" [719b8d99-559a-4d65-9bf0-9a2fc37718b7] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 19.0202431s
--- PASS: TestAddons/parallel/Headlamp (37.23s)

TestAddons/parallel/CloudSpanner (22.07s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-7ngts" [d0c5f259-be15-46c4-96a3-0cea91dee714] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0232704s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-179200
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-179200: (17.0295646s)
--- PASS: TestAddons/parallel/CloudSpanner (22.07s)

TestAddons/parallel/LocalPath (31.04s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-179200 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-179200 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-179200 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9ce8645a-c922-4087-a78d-f47eed902baa] Pending
helpers_test.go:344: "test-local-path" [9ce8645a-c922-4087-a78d-f47eed902baa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9ce8645a-c922-4087-a78d-f47eed902baa] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9ce8645a-c922-4087-a78d-f47eed902baa] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0119921s
addons_test.go:891: (dbg) Run:  kubectl --context addons-179200 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-179200 ssh "cat /opt/local-path-provisioner/pvc-2518c260-1d7d-459f-be84-8322ff56bda9_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-179200 ssh "cat /opt/local-path-provisioner/pvc-2518c260-1d7d-459f-be84-8322ff56bda9_default_test-pvc/file1": (10.6238533s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-179200 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-179200 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-179200 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-179200 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (7.6634353s)
--- PASS: TestAddons/parallel/LocalPath (31.04s)

TestAddons/parallel/NvidiaDevicePlugin (20.69s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xbgtm" [4444260f-9552-477a-8f00-7b18a379d51e] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0553989s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-179200
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-179200: (15.6289057s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (20.69s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-jpxm4" [f2d27100-f20e-4155-845a-fb8ec24f4f68] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0089598s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/serial/GCPAuth/Namespaces (0.36s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-179200 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-179200 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.36s)

TestAddons/StoppedEnableDisable (46.83s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-179200
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-179200: (34.99799s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-179200
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-179200: (4.7916129s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-179200
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-179200: (4.493212s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-179200
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-179200: (2.5450939s)
--- PASS: TestAddons/StoppedEnableDisable (46.83s)

TestCertOptions (337.55s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-072400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-072400 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (4m40.0967567s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-072400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-072400 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (9.9932037s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-072400 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-072400 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-072400 -- "sudo cat /etc/kubernetes/admin.conf": (9.8432603s)
helpers_test.go:175: Cleaning up "cert-options-072400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-072400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-072400: (37.4587629s)
--- PASS: TestCertOptions (337.55s)

TestDockerFlags (571.51s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-717200 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
E0116 03:38:13.045428   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-717200 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (8m18.9914761s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-717200 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-717200 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.2502964s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-717200 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
E0116 03:45:09.862681   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-717200 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (11.0603858s)
helpers_test.go:175: Cleaning up "docker-flags-717200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-717200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-717200: (51.203454s)
--- PASS: TestDockerFlags (571.51s)

TestForceSystemdFlag (381.21s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-175600 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-175600 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (5m27.7642288s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-175600 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-175600 ssh "docker info --format {{.CgroupDriver}}": (9.7301011s)
helpers_test.go:175: Cleaning up "force-systemd-flag-175600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-175600
E0116 03:36:16.263838   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-175600: (43.7125832s)
--- PASS: TestForceSystemdFlag (381.21s)

TestErrorSpam/start (17.28s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 start --dry-run: (5.7330353s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 start --dry-run: (5.75669s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 start --dry-run: (5.7821766s)
--- PASS: TestErrorSpam/start (17.28s)

TestErrorSpam/status (36.62s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 status
E0116 01:51:30.541623   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 status: (12.6456536s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 status: (11.9872642s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 status: (11.9826524s)
--- PASS: TestErrorSpam/status (36.62s)

TestErrorSpam/pause (22.61s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 pause: (7.7469342s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 pause: (7.4755792s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 pause: (7.3867525s)
--- PASS: TestErrorSpam/pause (22.61s)

TestErrorSpam/unpause (22.75s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 unpause: (7.5363988s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 unpause: (7.6992038s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 unpause: (7.5119182s)
--- PASS: TestErrorSpam/unpause (22.75s)

TestErrorSpam/stop (50.72s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 stop: (33.0884614s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 stop: (8.951508s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-827200 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-827200 stop: (8.6795265s)
--- PASS: TestErrorSpam/stop (50.72s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\13508\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (200.27s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-833600 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0116 01:54:14.397485   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
functional_test.go:2233: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-833600 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m20.2549965s)
--- PASS: TestFunctional/serial/StartWithProxy (200.27s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (108.18s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-833600 --alsologtostderr -v=8
E0116 01:58:46.580327   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-833600 --alsologtostderr -v=8: (1m48.1732639s)
functional_test.go:659: soft start took 1m48.1746254s for "functional-833600" cluster.
--- PASS: TestFunctional/serial/SoftStart (108.18s)

TestFunctional/serial/KubeContext (0.15s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.15s)

TestFunctional/serial/KubectlGetPods (0.25s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-833600 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.25s)

TestFunctional/serial/CacheCmd/cache/add_remote (27.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 cache add registry.k8s.io/pause:3.1: (9.5585492s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 cache add registry.k8s.io/pause:3.3: (8.9981937s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 cache add registry.k8s.io/pause:latest: (8.7721356s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (27.33s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (10.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-833600 C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2118033284\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-833600 C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2118033284\001: (2.2329594s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 cache add minikube-local-cache-test:functional-833600
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 cache add minikube-local-cache-test:functional-833600: (8.0614613s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 cache delete minikube-local-cache-test:functional-833600
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-833600
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (10.83s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 ssh sudo crictl images: (9.413383s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.41s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (36.9s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.4688099s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-833600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.7636215s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 02:00:00.607501    6044 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 cache reload: (8.2974816s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.3740586s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (36.90s)
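The recurring stderr warning in this run points at a context directory named `37a8eec1…`. Docker's CLI stores context metadata under `~/.docker/contexts/meta/<sha256 of the context name>`, so the directory for the `default` context is derived like this (a sketch of Docker's naming scheme, not of minikube code):

```python
import hashlib

# The Docker CLI keys context metadata directories by SHA-256 of the context name.
context_name = "default"
digest = hashlib.sha256(context_name.encode()).hexdigest()
print(digest)  # the directory name that appears in the warning above
```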

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.56s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.5s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 kubectl -- --context functional-833600 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.50s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (3.28s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out\kubectl.exe --context functional-833600 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (3.28s)

                                                
                                    
TestFunctional/serial/ExtraConfig (120.41s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-833600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-833600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m0.4115153s)
functional_test.go:757: restart took 2m0.4118085s for "functional-833600" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (120.41s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.19s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-833600 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.19s)
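The ComponentHealth check above reads `kubectl get po -o=json` for the control-plane pods and requires each to be in phase `Running` with a `Ready` condition of `True` — that is what the paired "phase"/"status" lines report. A simplified sketch of that filtering logic (field names follow the Kubernetes Pod API; the sample pod is illustrative):

```python
def control_plane_healthy(pod: dict) -> bool:
    """Healthy means status.phase == Running and a Ready condition equal to True."""
    status = pod.get("status", {})
    if status.get("phase") != "Running":
        return False
    conditions = status.get("conditions", [])
    return any(c.get("type") == "Ready" and c.get("status") == "True" for c in conditions)

sample = {
    "metadata": {"labels": {"component": "etcd", "tier": "control-plane"}},
    "status": {"phase": "Running", "conditions": [{"type": "Ready", "status": "True"}]},
}
print(control_plane_healthy(sample))  # True
```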

                                                
                                    
TestFunctional/serial/LogsCmd (8.48s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 logs: (8.4772789s)
--- PASS: TestFunctional/serial/LogsCmd (8.48s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (10.6s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 logs --file C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2303583958\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 logs --file C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2303583958\001\logs.txt: (10.5950595s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.60s)

                                                
                                    
TestFunctional/serial/InvalidService (20.6s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-833600 apply -f testdata\invalidsvc.yaml
functional_test.go:2334: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-833600
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-833600: exit status 115 (16.6485542s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://172.27.127.147:32099 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 02:02:55.445323    1896 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_service_5a553248039ac2ab6beea740c8d8ce1b809666c7_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-833600 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (20.60s)

                                                
                                    
TestFunctional/parallel/StatusCmd (42.78s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 status: (13.9860341s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (14.4184095s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 status -o json: (14.3696237s)
--- PASS: TestFunctional/parallel/StatusCmd (42.78s)
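The `-f` invocation above renders `minikube status` through a Go template over the fields also exposed by `status -o json` (Host, Kubelet, APIServer, Kubeconfig); note the `kublet:` label is spelled exactly as in the test's own format string. An equivalent rendering in Python, with illustrative field values:

```python
# Keys mirror the `minikube status -o json` fields; the values here are examples.
status = {"Host": "Running", "Kubelet": "Running", "APIServer": "Running", "Kubeconfig": "Configured"}

# Same shape as the Go template in the test, including its 'kublet' label.
line = "host:{Host},kublet:{Kubelet},apiserver:{APIServer},kubeconfig:{Kubeconfig}".format(**status)
print(line)
```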

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (28.38s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-833600 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-833600 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-zh67k" [fcf94d3c-1bfc-4298-bc2b-c803464af54a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-zh67k" [fcf94d3c-1bfc-4298-bc2b-c803464af54a] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.0107309s
functional_test.go:1648: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 service hello-node-connect --url
functional_test.go:1648: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 service hello-node-connect --url: (18.8490055s)
functional_test.go:1654: found endpoint for hello-node-connect: http://172.27.127.147:31298
functional_test.go:1674: http://172.27.127.147:31298: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-zh67k

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.27.127.147:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=172.27.127.147:31298
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (28.38s)
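`service hello-node-connect --url` reported `http://172.27.127.147:31298`, i.e. the node's IP plus the NodePort assigned to the service. Splitting such an endpoint when scripting against these URLs is straightforward (a generic sketch, not minikube code):

```python
from urllib.parse import urlparse

url = "http://172.27.127.147:31298"  # endpoint reported by the test above
parts = urlparse(url)
print(parts.hostname, parts.port)  # node IP and NodePort
```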

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.82s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.82s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (39.88s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [409a81a6-c8b6-45d6-b354-8198b0c4d9dc] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0164071s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-833600 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-833600 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-833600 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-833600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [30314c49-a55c-4a93-a306-8f027ce3b19f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [30314c49-a55c-4a93-a306-8f027ce3b19f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.0201842s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-833600 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-833600 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-833600 delete -f testdata/storage-provisioner/pod.yaml: (1.3578905s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-833600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e60d1b5c-5616-4f18-9835-c8c2b7103843] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e60d1b5c-5616-4f18-9835-c8c2b7103843] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0223921s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-833600 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.88s)
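The PVC test above exercises persistence end to end: it writes `/tmp/mount/foo` in `sp-pod`, deletes the pod, re-creates it against the same claim, and lists the file again. The invariant being checked, sketched with a local directory standing in for the volume (illustrative only):

```python
import os
import tempfile

# A local directory stands in for the persistent volume backing the claim.
volume = tempfile.mkdtemp()

# First pod lifetime: create a marker file (the test's `touch /tmp/mount/foo`).
open(os.path.join(volume, "foo"), "w").close()

# Pod deleted and re-created from the same PVC: the data must still be there.
print(os.listdir(volume))  # ['foo']
```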

                                                
                                    
TestFunctional/parallel/SSHCmd (20.94s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh "echo hello"
functional_test.go:1724: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 ssh "echo hello": (10.4173999s)
functional_test.go:1741: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh "cat /etc/hostname"
functional_test.go:1741: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 ssh "cat /etc/hostname": (10.5211883s)
--- PASS: TestFunctional/parallel/SSHCmd (20.94s)

                                                
                                    
TestFunctional/parallel/CpCmd (61.94s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.836441s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh -n functional-833600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 ssh -n functional-833600 "sudo cat /home/docker/cp-test.txt": (10.7588719s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 cp functional-833600:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalparallelCpCmd4065583377\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 cp functional-833600:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalparallelCpCmd4065583377\001\cp-test.txt: (11.1652621s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh -n functional-833600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 ssh -n functional-833600 "sudo cat /home/docker/cp-test.txt": (11.142787s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.377277s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh -n functional-833600 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 ssh -n functional-833600 "sudo cat /tmp/does/not/exist/cp-test.txt": (11.6526026s)
--- PASS: TestFunctional/parallel/CpCmd (61.94s)

                                                
                                    
TestFunctional/parallel/MySQL (58.39s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-833600 replace --force -f testdata\mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-74qcz" [0e9d03ab-c8fa-4c69-9f12-54c7b006aa2c] Pending
helpers_test.go:344: "mysql-859648c796-74qcz" [0e9d03ab-c8fa-4c69-9f12-54c7b006aa2c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-74qcz" [0e9d03ab-c8fa-4c69-9f12-54c7b006aa2c] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 48.0103992s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-833600 exec mysql-859648c796-74qcz -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-833600 exec mysql-859648c796-74qcz -- mysql -ppassword -e "show databases;": exit status 1 (390.0152ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-833600 exec mysql-859648c796-74qcz -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-833600 exec mysql-859648c796-74qcz -- mysql -ppassword -e "show databases;": exit status 1 (476.045ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-833600 exec mysql-859648c796-74qcz -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-833600 exec mysql-859648c796-74qcz -- mysql -ppassword -e "show databases;": exit status 1 (332.8278ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-833600 exec mysql-859648c796-74qcz -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-833600 exec mysql-859648c796-74qcz -- mysql -ppassword -e "show databases;": exit status 1 (288.8579ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-833600 exec mysql-859648c796-74qcz -- mysql -ppassword -e "show databases;"
E0116 02:08:46.589498   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/MySQL (58.39s)
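The repeated non-zero exits above are expected: a freshly started MySQL pod briefly rejects connections (`ERROR 1045` during initialization, `ERROR 2002` while the socket is not yet up) before settling, so the test simply re-runs `show databases;` until it succeeds. The same retry-until-healthy pattern in generic form (the `flaky` probe and attempt count are illustrative):

```python
import time

def retry(probe, attempts: int = 5, delay: float = 0.0):
    """Call probe() until it succeeds or the attempt budget is exhausted."""
    last_exc = None
    for _ in range(attempts):
        try:
            return probe()
        except Exception as exc:  # transient errors, e.g. auth/socket failures
            last_exc = exc
            time.sleep(delay)
    raise last_exc

calls = {"n": 0}
def flaky():
    """Fails three times, then succeeds (mimics the transient ERROR 1045)."""
    calls["n"] += 1
    if calls["n"] < 4:
        raise RuntimeError("Access denied")
    return "ok"

print(retry(flaky))  # succeeds on the 4th attempt
```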

                                                
                                    
TestFunctional/parallel/FileSync (11.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/13508/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh "sudo cat /etc/test/nested/copy/13508/hosts"
functional_test.go:1930: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 ssh "sudo cat /etc/test/nested/copy/13508/hosts": (11.2615279s)
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (11.26s)

TestFunctional/parallel/CertSync (66.15s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/13508.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh "sudo cat /etc/ssl/certs/13508.pem"
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 ssh "sudo cat /etc/ssl/certs/13508.pem": (12.2328468s)
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/13508.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh "sudo cat /usr/share/ca-certificates/13508.pem"
E0116 02:05:09.769990   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 ssh "sudo cat /usr/share/ca-certificates/13508.pem": (10.7399009s)
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1972: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 ssh "sudo cat /etc/ssl/certs/51391683.0": (10.9753429s)
functional_test.go:1998: Checking for existence of /etc/ssl/certs/135082.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh "sudo cat /etc/ssl/certs/135082.pem"
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 ssh "sudo cat /etc/ssl/certs/135082.pem": (12.0735862s)
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/135082.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh "sudo cat /usr/share/ca-certificates/135082.pem"
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 ssh "sudo cat /usr/share/ca-certificates/135082.pem": (10.0482554s)
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1999: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.082383s)
--- PASS: TestFunctional/parallel/CertSync (66.15s)

TestFunctional/parallel/NodeLabels (0.22s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-833600 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.22s)

TestFunctional/parallel/NonActiveRuntimeDisabled (11.33s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh "sudo systemctl is-active crio"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-833600 ssh "sudo systemctl is-active crio": exit status 1 (11.3267623s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W0116 02:03:14.494177   10704 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (11.33s)

TestFunctional/parallel/License (3.5s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2287: (dbg) Done: out/minikube-windows-amd64.exe license: (3.4849543s)
--- PASS: TestFunctional/parallel/License (3.50s)

TestFunctional/parallel/ServiceCmd/DeployApp (18.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-833600 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-833600 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-sn82k" [6b661b68-f1bc-4123-9a56-c4cbd6627903] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-sn82k" [6b661b68-f1bc-4123-9a56-c4cbd6627903] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 18.009707s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (18.46s)

TestFunctional/parallel/Version/short (0.41s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 version --short
--- PASS: TestFunctional/parallel/Version/short (0.41s)

TestFunctional/parallel/Version/components (9.09s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 version -o=json --components
functional_test.go:2269: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 version -o=json --components: (9.0905859s)
--- PASS: TestFunctional/parallel/Version/components (9.09s)

TestFunctional/parallel/ImageCommands/ImageListShort (7.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image ls --format short --alsologtostderr: (7.5337429s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-833600 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-833600
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-833600
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-833600 image ls --format short --alsologtostderr:
W0116 02:06:18.816870    6372 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0116 02:06:18.898482    6372 out.go:296] Setting OutFile to fd 744 ...
I0116 02:06:18.899744    6372 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:06:18.899744    6372 out.go:309] Setting ErrFile to fd 736...
I0116 02:06:18.899744    6372 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:06:18.919011    6372 config.go:182] Loaded profile config "functional-833600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0116 02:06:18.919585    6372 config.go:182] Loaded profile config "functional-833600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0116 02:06:18.920052    6372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-833600 ).state
I0116 02:06:21.176141    6372 main.go:141] libmachine: [stdout =====>] : Running

I0116 02:06:21.176141    6372 main.go:141] libmachine: [stderr =====>] : 
I0116 02:06:21.193234    6372 ssh_runner.go:195] Run: systemctl --version
I0116 02:06:21.193234    6372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-833600 ).state
I0116 02:06:23.409668    6372 main.go:141] libmachine: [stdout =====>] : Running

I0116 02:06:23.409668    6372 main.go:141] libmachine: [stderr =====>] : 
I0116 02:06:23.409785    6372 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-833600 ).networkadapters[0]).ipaddresses[0]
I0116 02:06:26.016698    6372 main.go:141] libmachine: [stdout =====>] : 172.27.127.147

I0116 02:06:26.016987    6372 main.go:141] libmachine: [stderr =====>] : 
I0116 02:06:26.017200    6372 sshutil.go:53] new ssh client: &{IP:172.27.127.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-833600\id_rsa Username:docker}
I0116 02:06:26.119884    6372 ssh_runner.go:235] Completed: systemctl --version: (4.9266179s)
I0116 02:06:26.135588    6372 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.53s)

TestFunctional/parallel/ImageCommands/ImageListTable (7.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image ls --format table --alsologtostderr: (7.6585551s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-833600 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | a8758716bb6aa | 187MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-833600 | 558c7510cdebf | 30B    |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| docker.io/library/nginx                     | alpine            | 529b5644c430c | 42.6MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-833600 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-833600 image ls --format table --alsologtostderr:
W0116 02:06:35.765487    8700 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0116 02:06:35.851011    8700 out.go:296] Setting OutFile to fd 952 ...
I0116 02:06:35.852359    8700 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:06:35.852402    8700 out.go:309] Setting ErrFile to fd 716...
I0116 02:06:35.852437    8700 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:06:35.868813    8700 config.go:182] Loaded profile config "functional-833600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0116 02:06:35.869349    8700 config.go:182] Loaded profile config "functional-833600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0116 02:06:35.869696    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-833600 ).state
I0116 02:06:38.110840    8700 main.go:141] libmachine: [stdout =====>] : Running

I0116 02:06:38.110840    8700 main.go:141] libmachine: [stderr =====>] : 
I0116 02:06:38.126823    8700 ssh_runner.go:195] Run: systemctl --version
I0116 02:06:38.127818    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-833600 ).state
I0116 02:06:40.409296    8700 main.go:141] libmachine: [stdout =====>] : Running

I0116 02:06:40.409480    8700 main.go:141] libmachine: [stderr =====>] : 
I0116 02:06:40.409480    8700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-833600 ).networkadapters[0]).ipaddresses[0]
I0116 02:06:43.069428    8700 main.go:141] libmachine: [stdout =====>] : 172.27.127.147

I0116 02:06:43.069610    8700 main.go:141] libmachine: [stderr =====>] : 
I0116 02:06:43.069974    8700 sshutil.go:53] new ssh client: &{IP:172.27.127.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-833600\id_rsa Username:docker}
I0116 02:06:43.174596    8700 ssh_runner.go:235] Completed: systemctl --version: (5.0477402s)
I0116 02:06:43.185898    8700 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.66s)

TestFunctional/parallel/ImageCommands/ImageListJson (7.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image ls --format json --alsologtostderr: (7.5441028s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-833600 image ls --format json --alsologtostderr:
[{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"42600000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"558c7510cdebfebd8ed005f01db561ef46b7608795f26a94b9479885f2808af2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-833600"],"size":"30"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-833600"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-833600 image ls --format json --alsologtostderr:
W0116 02:06:28.226730   10604 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0116 02:06:28.309804   10604 out.go:296] Setting OutFile to fd 792 ...
I0116 02:06:28.310557   10604 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:06:28.310557   10604 out.go:309] Setting ErrFile to fd 892...
I0116 02:06:28.310557   10604 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:06:28.329627   10604 config.go:182] Loaded profile config "functional-833600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0116 02:06:28.330042   10604 config.go:182] Loaded profile config "functional-833600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0116 02:06:28.330774   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-833600 ).state
I0116 02:06:30.507618   10604 main.go:141] libmachine: [stdout =====>] : Running

I0116 02:06:30.507742   10604 main.go:141] libmachine: [stderr =====>] : 
I0116 02:06:30.525368   10604 ssh_runner.go:195] Run: systemctl --version
I0116 02:06:30.525368   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-833600 ).state
I0116 02:06:32.780476   10604 main.go:141] libmachine: [stdout =====>] : Running

I0116 02:06:32.780718   10604 main.go:141] libmachine: [stderr =====>] : 
I0116 02:06:32.780718   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-833600 ).networkadapters[0]).ipaddresses[0]
I0116 02:06:35.442826   10604 main.go:141] libmachine: [stdout =====>] : 172.27.127.147

I0116 02:06:35.442887   10604 main.go:141] libmachine: [stderr =====>] : 
I0116 02:06:35.443196   10604 sshutil.go:53] new ssh client: &{IP:172.27.127.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-833600\id_rsa Username:docker}
I0116 02:06:35.544840   10604 ssh_runner.go:235] Completed: systemctl --version: (5.0194385s)
I0116 02:06:35.554963   10604 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.54s)

TestFunctional/parallel/ImageCommands/ImageListYaml (7.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image ls --format yaml --alsologtostderr: (7.6934743s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-833600 image ls --format yaml --alsologtostderr:
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 558c7510cdebfebd8ed005f01db561ef46b7608795f26a94b9479885f2808af2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-833600
size: "30"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 529b5644c430c06553d2e8082c6713fe19a4169c9dc2369cbb960081f52924ff
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "42600000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-833600
size: "32900000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-833600 image ls --format yaml --alsologtostderr:
W0116 02:06:20.546250    8776 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0116 02:06:20.644928    8776 out.go:296] Setting OutFile to fd 328 ...
I0116 02:06:20.659065    8776 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:06:20.659065    8776 out.go:309] Setting ErrFile to fd 848...
I0116 02:06:20.659176    8776 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:06:20.674600    8776 config.go:182] Loaded profile config "functional-833600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0116 02:06:20.675597    8776 config.go:182] Loaded profile config "functional-833600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0116 02:06:20.675915    8776 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-833600 ).state
I0116 02:06:22.888194    8776 main.go:141] libmachine: [stdout =====>] : Running

I0116 02:06:22.888194    8776 main.go:141] libmachine: [stderr =====>] : 
I0116 02:06:22.905481    8776 ssh_runner.go:195] Run: systemctl --version
I0116 02:06:22.905481    8776 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-833600 ).state
I0116 02:06:25.157988    8776 main.go:141] libmachine: [stdout =====>] : Running

I0116 02:06:25.158054    8776 main.go:141] libmachine: [stderr =====>] : 
I0116 02:06:25.158054    8776 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-833600 ).networkadapters[0]).ipaddresses[0]
I0116 02:06:27.850563    8776 main.go:141] libmachine: [stdout =====>] : 172.27.127.147

I0116 02:06:27.850774    8776 main.go:141] libmachine: [stderr =====>] : 
I0116 02:06:27.850998    8776 sshutil.go:53] new ssh client: &{IP:172.27.127.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-833600\id_rsa Username:docker}
I0116 02:06:27.953243    8776 ssh_runner.go:235] Completed: systemctl --version: (5.0477293s)
I0116 02:06:27.964624    8776 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.69s)
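The warning repeated in the stderr above points at `...\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json`. Docker stores each CLI context's metadata in a directory named after the SHA-256 digest of the context name, so the digest in the path can be reproduced directly; a minimal sketch (Python, standard library only — the path layout comment describes Docker's scheme, not minikube code):

```python
import hashlib

# Docker keeps CLI context metadata under ~/.docker/contexts/meta/<sha256(name)>/meta.json.
# The directory name in the warning is simply the digest of the context name "default";
# the warning fires because no such context file exists on the Jenkins worker.
digest = hashlib.sha256(b"default").hexdigest()
print(digest)  # 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```

The digest matches the directory name in every occurrence of the warning throughout this report, which is why the same path shows up regardless of which test binary emitted it.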

TestFunctional/parallel/ImageCommands/ImageBuild (27.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-833600 ssh pgrep buildkitd: exit status 1 (9.8739739s)

** stderr ** 
	W0116 02:06:26.344450   13156 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image build -t localhost/my-image:functional-833600 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image build -t localhost/my-image:functional-833600 testdata\build --alsologtostderr: (10.0137883s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-833600 image build -t localhost/my-image:functional-833600 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 99da2f071a69
Removing intermediate container 99da2f071a69
---> 9612b9426a9a
Step 3/3 : ADD content.txt /
---> 4a3c9adc3939
Successfully built 4a3c9adc3939
Successfully tagged localhost/my-image:functional-833600
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-833600 image build -t localhost/my-image:functional-833600 testdata\build --alsologtostderr:
W0116 02:06:36.216195   12528 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0116 02:06:36.303919   12528 out.go:296] Setting OutFile to fd 952 ...
I0116 02:06:36.321773   12528 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:06:36.321773   12528 out.go:309] Setting ErrFile to fd 716...
I0116 02:06:36.321898   12528 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0116 02:06:36.345320   12528 config.go:182] Loaded profile config "functional-833600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0116 02:06:36.368931   12528 config.go:182] Loaded profile config "functional-833600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0116 02:06:36.370395   12528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-833600 ).state
I0116 02:06:38.601367   12528 main.go:141] libmachine: [stdout =====>] : Running

I0116 02:06:38.601529   12528 main.go:141] libmachine: [stderr =====>] : 
I0116 02:06:38.623146   12528 ssh_runner.go:195] Run: systemctl --version
I0116 02:06:38.623146   12528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-833600 ).state
I0116 02:06:40.885775   12528 main.go:141] libmachine: [stdout =====>] : Running

I0116 02:06:40.885775   12528 main.go:141] libmachine: [stderr =====>] : 
I0116 02:06:40.885775   12528 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-833600 ).networkadapters[0]).ipaddresses[0]
I0116 02:06:43.509414   12528 main.go:141] libmachine: [stdout =====>] : 172.27.127.147

I0116 02:06:43.509414   12528 main.go:141] libmachine: [stderr =====>] : 
I0116 02:06:43.509414   12528 sshutil.go:53] new ssh client: &{IP:172.27.127.147 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-833600\id_rsa Username:docker}
I0116 02:06:43.610636   12528 ssh_runner.go:235] Completed: systemctl --version: (4.9874578s)
I0116 02:06:43.610802   12528 build_images.go:151] Building image from path: C:\Users\jenkins.minikube3\AppData\Local\Temp\build.590378766.tar
I0116 02:06:43.628260   12528 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0116 02:06:43.659881   12528 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.590378766.tar
I0116 02:06:43.666223   12528 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.590378766.tar: stat -c "%s %y" /var/lib/minikube/build/build.590378766.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.590378766.tar': No such file or directory
I0116 02:06:43.666476   12528 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\AppData\Local\Temp\build.590378766.tar --> /var/lib/minikube/build/build.590378766.tar (3072 bytes)
I0116 02:06:43.773507   12528 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.590378766
I0116 02:06:43.828929   12528 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.590378766 -xf /var/lib/minikube/build/build.590378766.tar
I0116 02:06:43.844557   12528 docker.go:360] Building image: /var/lib/minikube/build/build.590378766
I0116 02:06:43.856238   12528 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-833600 /var/lib/minikube/build/build.590378766
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0116 02:06:45.986300   12528 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-833600 /var/lib/minikube/build/build.590378766: (2.1300478s)
I0116 02:06:46.001161   12528 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.590378766
I0116 02:06:46.040058   12528 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.590378766.tar
I0116 02:06:46.059584   12528 build_images.go:207] Built localhost/my-image:functional-833600 from C:\Users\jenkins.minikube3\AppData\Local\Temp\build.590378766.tar
I0116 02:06:46.059748   12528 build_images.go:123] succeeded building to: functional-833600
I0116 02:06:46.059806   12528 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image ls: (7.40353s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (27.29s)
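The build log above shows a check-then-copy pattern: minikube first runs `stat -c "%s %y"` on the tarball inside the VM, and only transfers it over SSH when the existence check exits with status 1. A hypothetical local sketch of that logic (Python; `ensure_uploaded`, the paths, and the plain file copy standing in for the ssh/scp transport are illustrative, not minikube's API):

```python
import os
import shutil
import tempfile

def ensure_uploaded(src: str, dest: str) -> bool:
    """Mimic the log's flow: probe the destination first and copy only
    when the existence check fails (as when stat exited with status 1)."""
    if os.path.exists(dest):          # stands in for: stat -c "%s %y" dest
        return False                  # already present, nothing transferred
    os.makedirs(os.path.dirname(dest), exist_ok=True)  # sudo mkdir -p
    shutil.copyfile(src, dest)        # stands in for the scp step
    return True

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "build.tar")
    with open(src, "wb") as f:
        f.write(b"build context")
    dest = os.path.join(tmp, "var", "lib", "minikube", "build", "build.tar")
    print(ensure_uploaded(src, dest))  # True  (first call copies)
    print(ensure_uploaded(src, dest))  # False (second call is a no-op)
```

The skipped copy on the second call is why the failed `stat` in the log is expected output rather than an error: it is the probe that triggers the transfer.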

TestFunctional/parallel/ImageCommands/Setup (4.21s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.9452928s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-833600
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (24.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image load --daemon gcr.io/google-containers/addon-resizer:functional-833600 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image load --daemon gcr.io/google-containers/addon-resizer:functional-833600 --alsologtostderr: (15.9176846s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image ls: (8.5744655s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (24.49s)

TestFunctional/parallel/ServiceCmd/List (14.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 service list
functional_test.go:1458: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 service list: (14.2892707s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.29s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (21.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image load --daemon gcr.io/google-containers/addon-resizer:functional-833600 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image load --daemon gcr.io/google-containers/addon-resizer:functional-833600 --alsologtostderr: (12.8209312s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image ls: (8.4707253s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (21.29s)

TestFunctional/parallel/ServiceCmd/JSONOutput (14.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 service list -o json
E0116 02:03:46.589721   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
functional_test.go:1488: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 service list -o json: (14.1319236s)
functional_test.go:1493: Took "14.1322534s" to run "out/minikube-windows-amd64.exe -p functional-833600 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.13s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (27.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.5950546s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-833600
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image load --daemon gcr.io/google-containers/addon-resizer:functional-833600 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image load --daemon gcr.io/google-containers/addon-resizer:functional-833600 --alsologtostderr: (15.2218785s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image ls: (8.2930673s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (27.37s)

TestFunctional/parallel/ProfileCmd/profile_not_create (9.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1274: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (8.8615104s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (9.43s)

TestFunctional/parallel/ProfileCmd/profile_list (8.71s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1309: (dbg) Done: out/minikube-windows-amd64.exe profile list: (8.4093316s)
functional_test.go:1314: Took "8.409692s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1328: Took "295.3707ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (8.71s)

TestFunctional/parallel/ProfileCmd/profile_json_output (8.78s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1360: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (8.4656523s)
functional_test.go:1365: Took "8.4659768s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1378: Took "317.0117ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (8.78s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image save gcr.io/google-containers/addon-resizer:functional-833600 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image save gcr.io/google-containers/addon-resizer:functional-833600 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (9.0377084s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (9.04s)

TestFunctional/parallel/ImageCommands/ImageRemove (17.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image rm gcr.io/google-containers/addon-resizer:functional-833600 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image rm gcr.io/google-containers/addon-resizer:functional-833600 --alsologtostderr: (8.1820843s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image ls: (8.9963894s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (17.18s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.72s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-833600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-833600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-833600 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-833600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1036: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 4024: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.72s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (20.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (11.8529055s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image ls: (8.6878542s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (20.54s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-833600 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.7s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-833600 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [20f596df-9332-4709-b07e-99ab4eac38b8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [20f596df-9332-4709-b07e-99ab4eac38b8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.0185542s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.70s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-833600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 10528: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/DockerEnv/powershell (46.58s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-833600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-833600"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-833600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-833600": (31.5196712s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-833600 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-833600 docker-env | Invoke-Expression ; docker images": (15.0422356s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (46.58s)
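The `DockerEnv/powershell` test above works because `docker-env` prints PowerShell assignments (lines of the form `$Env:NAME = "value"`) that `Invoke-Expression` then evaluates, pointing the `docker` CLI at the VM's daemon. A small sketch of what evaluating that output amounts to (Python; the sample lines are adapted from this report's VM address and the regex parser is illustrative, not how PowerShell itself parses):

```python
import re

# Lines in the shape minikube's docker-env emits for PowerShell; Invoke-Expression
# evaluates them, setting these variables in the caller's environment.
SAMPLE = '''\
$Env:DOCKER_TLS_VERIFY = "1"
$Env:DOCKER_HOST = "tcp://172.27.127.147:2376"
$Env:DOCKER_CERT_PATH = "C:\\Users\\jenkins\\.minikube\\certs"
'''

# Illustrative parser: collect $Env:NAME = "value" pairs into a dict.
PATTERN = re.compile(r'^\$Env:(\w+)\s*=\s*"([^"]*)"', re.MULTILINE)
env = dict(PATTERN.findall(SAMPLE))
print(env["DOCKER_HOST"])  # tcp://172.27.127.147:2376
```

With those variables set, the subsequent `docker images` in the test lists images inside the functional-833600 VM rather than on the Windows host, which is exactly what the second command in the test verifies.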

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-833600
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 image save --daemon gcr.io/google-containers/addon-resizer:functional-833600 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 image save --daemon gcr.io/google-containers/addon-resizer:functional-833600 --alsologtostderr: (9.891977s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-833600
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.43s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.76s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 update-context --alsologtostderr -v=2: (2.7634477s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.76s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.65s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 update-context --alsologtostderr -v=2: (2.6448306s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.65s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.59s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-833600 update-context --alsologtostderr -v=2
functional_test.go:2118: (dbg) Done: out/minikube-windows-amd64.exe -p functional-833600 update-context --alsologtostderr -v=2: (2.5816423s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.59s)

TestFunctional/delete_addon-resizer_images (0.47s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-833600
--- PASS: TestFunctional/delete_addon-resizer_images (0.47s)

                                                

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-833600
--- PASS: TestFunctional/delete_my-image_image (0.20s)

TestFunctional/delete_minikube_cached_images (0.2s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-833600
--- PASS: TestFunctional/delete_minikube_cached_images (0.20s)

TestImageBuild/serial/Setup (187.52s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-523700 --driver=hyperv
E0116 02:13:13.013166   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:13:13.028677   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:13:13.044622   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:13:13.076230   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:13:13.124399   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:13:13.219274   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:13:13.394136   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:13:13.714678   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:13:14.368360   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:13:15.652224   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:13:18.218776   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:13:23.340410   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:13:33.581129   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:13:46.591043   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 02:13:54.068866   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:14:35.044525   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-523700 --driver=hyperv: (3m7.5228527s)
--- PASS: TestImageBuild/serial/Setup (187.52s)

TestImageBuild/serial/NormalBuild (9.08s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-523700
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-523700: (9.0817649s)
--- PASS: TestImageBuild/serial/NormalBuild (9.08s)

TestImageBuild/serial/BuildWithBuildArg (8.61s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-523700
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-523700: (8.6086873s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.61s)

TestImageBuild/serial/BuildWithDockerIgnore (7.63s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-523700
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-523700: (7.6289813s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.63s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.5s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-523700
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-523700: (7.5042012s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.50s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (68.14s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-556300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-556300 --output=json --user=testUser: (1m8.1434807s)
--- PASS: TestJSONOutput/stop/Command (68.14s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.57s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-370300 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-370300 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (279.9115ms)
-- stdout --
	{"specversion":"1.0","id":"8ee08899-6bbd-4da9-a55a-938525ed1930","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-370300] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7a6b660-7122-4a3d-ae1d-fdd55959c695","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube3\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"43753914-3b50-4e2b-b5e0-eb4ba9ecd347","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f3dd7190-251e-4c0d-9626-89747d12d04d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"a2030033-3c4f-4bac-b258-4e5bb23b887e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17967"}}
	{"specversion":"1.0","id":"cfae601c-6565-4093-bc49-b015719e536b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ac37aecf-8faf-4f6a-abc2-497cd77ae91e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
** stderr ** 
	W0116 02:28:18.780394    3800 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-370300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-370300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-370300: (1.2930601s)
--- PASS: TestErrorJSONOutput (1.57s)

TestMainNoArgs (0.25s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.25s)

TestMinikubeProfile (489.38s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-734000 --driver=hyperv
E0116 02:28:46.591626   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 02:29:36.206205   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-734000 --driver=hyperv: (3m8.5149075s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-734000 --driver=hyperv
E0116 02:33:13.013763   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:33:46.602100   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-734000 --driver=hyperv: (3m11.1496786s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-734000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (14.9758415s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-734000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (14.8573303s)
helpers_test.go:175: Cleaning up "second-734000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-734000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-734000: (41.6547876s)
helpers_test.go:175: Cleaning up "first-734000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-734000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-734000: (37.2547286s)
--- PASS: TestMinikubeProfile (489.38s)

TestMountStart/serial/StartWithMountFirst (148.22s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-771100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0116 02:38:13.013116   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:38:29.807159   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 02:38:46.592640   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-771100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m27.2016826s)
--- PASS: TestMountStart/serial/StartWithMountFirst (148.22s)

TestMountStart/serial/VerifyMountFirst (9.61s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-771100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-771100 ssh -- ls /minikube-host: (9.6134699s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.61s)

TestMountStart/serial/StartWithMountSecond (148.62s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-771100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-771100 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m27.6153561s)
--- PASS: TestMountStart/serial/StartWithMountSecond (148.62s)

TestMountStart/serial/VerifyMountSecond (9.52s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-771100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-771100 ssh -- ls /minikube-host: (9.5183533s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.52s)

TestMountStart/serial/DeleteFirst (26.05s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-771100 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-771100 --alsologtostderr -v=5: (26.04704s)
--- PASS: TestMountStart/serial/DeleteFirst (26.05s)

TestMountStart/serial/VerifyMountPostDelete (9.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-771100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-771100 ssh -- ls /minikube-host: (9.3950763s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.40s)

TestMountStart/serial/Stop (21.83s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-771100
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-771100: (21.8287442s)
--- PASS: TestMountStart/serial/Stop (21.83s)

TestMountStart/serial/RestartStopped (111.04s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-771100
E0116 02:43:13.024249   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:43:46.601759   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-771100: (1m50.0291523s)
--- PASS: TestMountStart/serial/RestartStopped (111.04s)

TestMountStart/serial/VerifyMountPostStop (9.49s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-771100 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-771100 ssh -- ls /minikube-host: (9.4911191s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.49s)

TestMultiNode/serial/FreshStart2Nodes (404.18s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-853900 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0116 02:46:16.228098   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:48:13.022027   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:48:46.600069   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
multinode_test.go:86: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-853900 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m20.4091647s)
multinode_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 status --alsologtostderr
multinode_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 status --alsologtostderr: (23.7655502s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (404.18s)

TestMultiNode/serial/DeployApp2Nodes (9.42s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- rollout status deployment/busybox: (3.0688397s)
multinode_test.go:521: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- exec busybox-5bc68d56bd-9t8fh -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- exec busybox-5bc68d56bd-9t8fh -- nslookup kubernetes.io: (1.8370715s)
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- exec busybox-5bc68d56bd-fp6wc -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- exec busybox-5bc68d56bd-9t8fh -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- exec busybox-5bc68d56bd-fp6wc -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- exec busybox-5bc68d56bd-9t8fh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-853900 -- exec busybox-5bc68d56bd-fp6wc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.42s)

TestMultiNode/serial/AddNode (213.16s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-853900 -v 3 --alsologtostderr
E0116 02:53:13.026989   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 02:53:46.607187   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 02:55:09.816132   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
multinode_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-853900 -v 3 --alsologtostderr: (2m57.527082s)
multinode_test.go:117: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 status --alsologtostderr
multinode_test.go:117: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 status --alsologtostderr: (35.6279456s)
--- PASS: TestMultiNode/serial/AddNode (213.16s)

TestMultiNode/serial/MultiNodeLabels (0.2s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-853900 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.20s)

TestMultiNode/serial/ProfileList (7.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (7.6716126s)
--- PASS: TestMultiNode/serial/ProfileList (7.67s)

TestMultiNode/serial/CopyFile (359.69s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 status --output json --alsologtostderr: (35.4327554s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 cp testdata\cp-test.txt multinode-853900:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 cp testdata\cp-test.txt multinode-853900:/home/docker/cp-test.txt: (9.34752s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900 "sudo cat /home/docker/cp-test.txt": (9.3549453s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile3874047493\001\cp-test_multinode-853900.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile3874047493\001\cp-test_multinode-853900.txt: (9.3083482s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900 "sudo cat /home/docker/cp-test.txt": (9.4990234s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900:/home/docker/cp-test.txt multinode-853900-m02:/home/docker/cp-test_multinode-853900_multinode-853900-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900:/home/docker/cp-test.txt multinode-853900-m02:/home/docker/cp-test_multinode-853900_multinode-853900-m02.txt: (16.5138919s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900 "sudo cat /home/docker/cp-test.txt"
E0116 02:58:13.032746   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900 "sudo cat /home/docker/cp-test.txt": (9.4037324s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m02 "sudo cat /home/docker/cp-test_multinode-853900_multinode-853900-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m02 "sudo cat /home/docker/cp-test_multinode-853900_multinode-853900-m02.txt": (9.4821802s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900:/home/docker/cp-test.txt multinode-853900-m03:/home/docker/cp-test_multinode-853900_multinode-853900-m03.txt
E0116 02:58:46.611729   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900:/home/docker/cp-test.txt multinode-853900-m03:/home/docker/cp-test_multinode-853900_multinode-853900-m03.txt: (16.3053531s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900 "sudo cat /home/docker/cp-test.txt": (9.3773887s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m03 "sudo cat /home/docker/cp-test_multinode-853900_multinode-853900-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m03 "sudo cat /home/docker/cp-test_multinode-853900_multinode-853900-m03.txt": (9.2850988s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 cp testdata\cp-test.txt multinode-853900-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 cp testdata\cp-test.txt multinode-853900-m02:/home/docker/cp-test.txt: (9.4854886s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m02 "sudo cat /home/docker/cp-test.txt": (9.2516386s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile3874047493\001\cp-test_multinode-853900-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile3874047493\001\cp-test_multinode-853900-m02.txt: (9.4824554s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m02 "sudo cat /home/docker/cp-test.txt": (9.4690509s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900-m02:/home/docker/cp-test.txt multinode-853900:/home/docker/cp-test_multinode-853900-m02_multinode-853900.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900-m02:/home/docker/cp-test.txt multinode-853900:/home/docker/cp-test_multinode-853900-m02_multinode-853900.txt: (16.2079443s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m02 "sudo cat /home/docker/cp-test.txt": (9.6038757s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900 "sudo cat /home/docker/cp-test_multinode-853900-m02_multinode-853900.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900 "sudo cat /home/docker/cp-test_multinode-853900-m02_multinode-853900.txt": (9.4088909s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900-m02:/home/docker/cp-test.txt multinode-853900-m03:/home/docker/cp-test_multinode-853900-m02_multinode-853900-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900-m02:/home/docker/cp-test.txt multinode-853900-m03:/home/docker/cp-test_multinode-853900-m02_multinode-853900-m03.txt: (16.3591775s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m02 "sudo cat /home/docker/cp-test.txt": (9.507097s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m03 "sudo cat /home/docker/cp-test_multinode-853900-m02_multinode-853900-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m03 "sudo cat /home/docker/cp-test_multinode-853900-m02_multinode-853900-m03.txt": (9.4255536s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 cp testdata\cp-test.txt multinode-853900-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 cp testdata\cp-test.txt multinode-853900-m03:/home/docker/cp-test.txt: (9.5059165s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m03 "sudo cat /home/docker/cp-test.txt": (9.471806s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile3874047493\001\cp-test_multinode-853900-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile3874047493\001\cp-test_multinode-853900-m03.txt: (9.4874616s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m03 "sudo cat /home/docker/cp-test.txt": (9.4616765s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900-m03:/home/docker/cp-test.txt multinode-853900:/home/docker/cp-test_multinode-853900-m03_multinode-853900.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900-m03:/home/docker/cp-test.txt multinode-853900:/home/docker/cp-test_multinode-853900-m03_multinode-853900.txt: (16.4995841s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m03 "sudo cat /home/docker/cp-test.txt": (9.3842573s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900 "sudo cat /home/docker/cp-test_multinode-853900-m03_multinode-853900.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900 "sudo cat /home/docker/cp-test_multinode-853900-m03_multinode-853900.txt": (9.2830821s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900-m03:/home/docker/cp-test.txt multinode-853900-m02:/home/docker/cp-test_multinode-853900-m03_multinode-853900-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 cp multinode-853900-m03:/home/docker/cp-test.txt multinode-853900-m02:/home/docker/cp-test_multinode-853900-m03_multinode-853900-m02.txt: (16.3102676s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m03 "sudo cat /home/docker/cp-test.txt": (9.3180581s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m02 "sudo cat /home/docker/cp-test_multinode-853900-m03_multinode-853900-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 ssh -n multinode-853900-m02 "sudo cat /home/docker/cp-test_multinode-853900-m03_multinode-853900-m02.txt": (9.4405739s)
--- PASS: TestMultiNode/serial/CopyFile (359.69s)

                                                
                                    
TestMultiNode/serial/StopNode (66.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 node stop m03
E0116 03:02:56.241244   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
multinode_test.go:238: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 node stop m03: (14.2288586s)
multinode_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 status
E0116 03:03:13.034520   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-853900 status: exit status 7 (26.030947s)

                                                
                                                
-- stdout --
	multinode-853900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-853900-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-853900-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 03:02:56.679113   13848 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
multinode_test.go:251: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 status --alsologtostderr
E0116 03:03:46.611274   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-853900 status --alsologtostderr: exit status 7 (26.0263415s)

                                                
                                                
-- stdout --
	multinode-853900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-853900-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-853900-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 03:03:22.712352    9700 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0116 03:03:22.796399    9700 out.go:296] Setting OutFile to fd 716 ...
	I0116 03:03:22.797563    9700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:03:22.797611    9700 out.go:309] Setting ErrFile to fd 892...
	I0116 03:03:22.797640    9700 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 03:03:22.813988    9700 out.go:303] Setting JSON to false
	I0116 03:03:22.814058    9700 mustload.go:65] Loading cluster: multinode-853900
	I0116 03:03:22.814168    9700 notify.go:220] Checking for updates...
	I0116 03:03:22.814928    9700 config.go:182] Loaded profile config "multinode-853900": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 03:03:22.814928    9700 status.go:255] checking status of multinode-853900 ...
	I0116 03:03:22.815955    9700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:03:25.011228    9700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:03:25.011291    9700 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:03:25.011291    9700 status.go:330] multinode-853900 host status = "Running" (err=<nil>)
	I0116 03:03:25.011291    9700 host.go:66] Checking if "multinode-853900" exists ...
	I0116 03:03:25.012115    9700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:03:27.218490    9700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:03:27.218490    9700 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:03:27.218582    9700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:03:29.781670    9700 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 03:03:29.781670    9700 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:03:29.781883    9700 host.go:66] Checking if "multinode-853900" exists ...
	I0116 03:03:29.799163    9700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 03:03:29.799163    9700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900 ).state
	I0116 03:03:31.998864    9700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:03:31.998864    9700 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:03:31.999061    9700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900 ).networkadapters[0]).ipaddresses[0]
	I0116 03:03:34.587735    9700 main.go:141] libmachine: [stdout =====>] : 172.27.112.69
	
	I0116 03:03:34.587735    9700 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:03:34.588166    9700 sshutil.go:53] new ssh client: &{IP:172.27.112.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900\id_rsa Username:docker}
	I0116 03:03:34.690795    9700 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8916001s)
	I0116 03:03:34.703714    9700 ssh_runner.go:195] Run: systemctl --version
	I0116 03:03:34.729498    9700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:03:34.751459    9700 kubeconfig.go:92] found "multinode-853900" server: "https://172.27.112.69:8443"
	I0116 03:03:34.751459    9700 api_server.go:166] Checking apiserver status ...
	I0116 03:03:34.768786    9700 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0116 03:03:34.802351    9700 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2024/cgroup
	I0116 03:03:34.816803    9700 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod41ea37f04f983128860ae937c9f060bb/e829a48e9f669d448cfcffc669aa82dda3df2019316862c93f095d62535df965"
	I0116 03:03:34.830432    9700 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod41ea37f04f983128860ae937c9f060bb/e829a48e9f669d448cfcffc669aa82dda3df2019316862c93f095d62535df965/freezer.state
	I0116 03:03:34.844406    9700 api_server.go:204] freezer state: "THAWED"
	I0116 03:03:34.844406    9700 api_server.go:253] Checking apiserver healthz at https://172.27.112.69:8443/healthz ...
	I0116 03:03:34.852791    9700 api_server.go:279] https://172.27.112.69:8443/healthz returned 200:
	ok
	I0116 03:03:34.852791    9700 status.go:421] multinode-853900 apiserver status = Running (err=<nil>)
	I0116 03:03:34.852791    9700 status.go:257] multinode-853900 status: &{Name:multinode-853900 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0116 03:03:34.853759    9700 status.go:255] checking status of multinode-853900-m02 ...
	I0116 03:03:34.853826    9700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:03:37.052989    9700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:03:37.052989    9700 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:03:37.053088    9700 status.go:330] multinode-853900-m02 host status = "Running" (err=<nil>)
	I0116 03:03:37.053088    9700 host.go:66] Checking if "multinode-853900-m02" exists ...
	I0116 03:03:37.053988    9700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:03:39.188043    9700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:03:39.188290    9700 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:03:39.188369    9700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:03:41.701520    9700 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 03:03:41.701520    9700 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:03:41.701520    9700 host.go:66] Checking if "multinode-853900-m02" exists ...
	I0116 03:03:41.714835    9700 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0116 03:03:41.715836    9700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m02 ).state
	I0116 03:03:43.812605    9700 main.go:141] libmachine: [stdout =====>] : Running
	
	I0116 03:03:43.812605    9700 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:03:43.812783    9700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-853900-m02 ).networkadapters[0]).ipaddresses[0]
	I0116 03:03:46.328329    9700 main.go:141] libmachine: [stdout =====>] : 172.27.122.78
	
	I0116 03:03:46.328329    9700 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:03:46.328595    9700 sshutil.go:53] new ssh client: &{IP:172.27.122.78 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-853900-m02\id_rsa Username:docker}
	I0116 03:03:46.429271    9700 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.713405s)
	I0116 03:03:46.443777    9700 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0116 03:03:46.463753    9700 status.go:257] multinode-853900-m02 status: &{Name:multinode-853900-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0116 03:03:46.463753    9700 status.go:255] checking status of multinode-853900-m03 ...
	I0116 03:03:46.464947    9700 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-853900-m03 ).state
	I0116 03:03:48.569377    9700 main.go:141] libmachine: [stdout =====>] : Off
	
	I0116 03:03:48.569606    9700 main.go:141] libmachine: [stderr =====>] : 
	I0116 03:03:48.569790    9700 status.go:330] multinode-853900-m03 host status = "Stopped" (err=<nil>)
	I0116 03:03:48.569853    9700 status.go:343] host is not running, skipping remaining checks
	I0116 03:03:48.569853    9700 status.go:257] multinode-853900-m03 status: &{Name:multinode-853900-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (66.29s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (165.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 node start m03 --alsologtostderr: (2m9.8634871s)
multinode_test.go:289: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-853900 status
multinode_test.go:289: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-853900 status: (35.3930759s)
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (165.46s)

                                                
                                    
TestPreload (463.77s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-700700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0116 03:18:13.028069   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 03:18:46.608393   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
E0116 03:19:36.256043   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-700700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (3m46.0920469s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-700700 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-700700 image pull gcr.io/k8s-minikube/busybox: (8.3099576s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-700700
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-700700: (33.3249922s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-700700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0116 03:23:13.029973   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 03:23:46.623619   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-700700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m31.8392873s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-700700 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-700700 image list: (7.4339564s)
helpers_test.go:175: Cleaning up "test-preload-700700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-700700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-700700: (36.7662395s)
--- PASS: TestPreload (463.77s)

                                                
                                    
TestRunningBinaryUpgrade (1063.97s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.2211167376.exe start -p running-upgrade-175600 --memory=2200 --vm-driver=hyperv
E0116 03:33:13.047584   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.2211167376.exe start -p running-upgrade-175600 --memory=2200 --vm-driver=hyperv: (8m15.5063228s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-175600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0116 03:38:46.628506   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-175600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (8m25.2468056s)
helpers_test.go:175: Cleaning up "running-upgrade-175600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-175600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-175600: (1m1.9588852s)
--- PASS: TestRunningBinaryUpgrade (1063.97s)

                                                
                                    
TestKubernetesUpgrade (1152.07s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-069600 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv
E0116 03:48:46.633958   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-069600 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv: (6m22.0047895s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-069600
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-069600: (34.0498458s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-069600 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-069600 status --format={{.Host}}: exit status 7 (2.5247352s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0116 03:55:41.529292    9124 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-069600 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-069600 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (7m32.5353395s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-069600 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-069600 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-069600 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv: exit status 106 (297.276ms)

-- stdout --
	* [kubernetes-upgrade-069600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0116 04:03:16.783158    3780 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-069600
	    minikube start -p kubernetes-upgrade-069600 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0696002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-069600 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-069600 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-069600 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=hyperv: (4m0.766675s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-069600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-069600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-069600: (39.710045s)
--- PASS: TestKubernetesUpgrade (1152.07s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.32s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-175600 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-175600 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (317.0134ms)

-- stdout --
	* [NoKubernetes-175600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0116 03:30:13.085201    8444 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.32s)

TestPause/serial/Start (384.74s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-143300 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-143300 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (6m24.7375849s)
--- PASS: TestPause/serial/Start (384.74s)

TestPause/serial/SecondStartNoReconfiguration (338.18s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-143300 --alsologtostderr -v=1 --driver=hyperv
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-143300 --alsologtostderr -v=1 --driver=hyperv: (5m38.1424981s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (338.18s)

TestPause/serial/Pause (8.58s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-143300 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-143300 --alsologtostderr -v=5: (8.5788873s)
--- PASS: TestPause/serial/Pause (8.58s)

TestPause/serial/VerifyStatus (13.14s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-143300 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-143300 --output=json --layout=cluster: exit status 2 (13.1366329s)

-- stdout --
	{"Name":"pause-143300","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-143300","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W0116 03:47:08.858278    2436 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestPause/serial/VerifyStatus (13.14s)

TestPause/serial/Unpause (8.63s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-143300 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-143300 --alsologtostderr -v=5: (8.6260793s)
--- PASS: TestPause/serial/Unpause (8.63s)

TestPause/serial/PauseAgain (8.7s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-143300 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-143300 --alsologtostderr -v=5: (8.7007954s)
--- PASS: TestPause/serial/PauseAgain (8.70s)

TestPause/serial/DeletePaused (54.65s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-143300 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-143300 --alsologtostderr -v=5: (54.6495842s)
--- PASS: TestPause/serial/DeletePaused (54.65s)

TestPause/serial/VerifyDeletedResources (10.17s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.1651368s)
--- PASS: TestPause/serial/VerifyDeletedResources (10.17s)

TestStoppedBinaryUpgrade/Setup (1.07s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.07s)

TestStoppedBinaryUpgrade/Upgrade (876.5s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.3758798881.exe start -p stopped-upgrade-978100 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.3758798881.exe start -p stopped-upgrade-978100 --memory=2200 --vm-driver=hyperv: (7m0.2707449s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.3758798881.exe -p stopped-upgrade-978100 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.3758798881.exe -p stopped-upgrade-978100 stop: (35.0332817s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-978100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0116 03:58:13.058772   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-833600\client.crt: The system cannot find the path specified.
E0116 03:58:46.637643   13508 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-179200\client.crt: The system cannot find the path specified.
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-978100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (7m1.1912617s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (876.50s)

TestStoppedBinaryUpgrade/MinikubeLogs (9.13s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-978100
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-978100: (9.1252309s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (9.13s)

Test skip (32/212)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-833600 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-833600 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 1912: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/DryRun (5.05s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-833600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-833600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0503974s)

-- stdout --
	* [functional-833600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0116 02:05:27.316654    5812 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0116 02:05:27.405627    5812 out.go:296] Setting OutFile to fd 984 ...
	I0116 02:05:27.406285    5812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:05:27.406285    5812 out.go:309] Setting ErrFile to fd 952...
	I0116 02:05:27.406285    5812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:05:27.429626    5812 out.go:303] Setting JSON to false
	I0116 02:05:27.433628    5812 start.go:128] hostinfo: {"hostname":"minikube3","uptime":49118,"bootTime":1705321609,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0116 02:05:27.433628    5812 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0116 02:05:27.435637    5812 out.go:177] * [functional-833600] minikube v1.32.0 on Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0116 02:05:27.436629    5812 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 02:05:27.437627    5812 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:05:27.436629    5812 notify.go:220] Checking for updates...
	I0116 02:05:27.438628    5812 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0116 02:05:27.439634    5812 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 02:05:27.440632    5812 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:05:27.442632    5812 config.go:182] Loaded profile config "functional-833600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 02:05:27.443626    5812 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.05s)

TestFunctional/parallel/InternationalLanguage (5.04s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-833600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-833600 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0392757s)

-- stdout --
	* [functional-833600] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=17967
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0116 02:05:30.738320    9248 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0116 02:05:30.817311    9248 out.go:296] Setting OutFile to fd 736 ...
	I0116 02:05:30.817311    9248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:05:30.817311    9248 out.go:309] Setting ErrFile to fd 584...
	I0116 02:05:30.817311    9248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0116 02:05:30.840303    9248 out.go:303] Setting JSON to false
	I0116 02:05:30.844300    9248 start.go:128] hostinfo: {"hostname":"minikube3","uptime":49121,"bootTime":1705321609,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3930 Build 19045.3930","kernelVersion":"10.0.19045.3930 Build 19045.3930","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0116 02:05:30.845304    9248 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0116 02:05:30.846324    9248 out.go:177] * [functional-833600] minikube v1.32.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.3930 Build 19045.3930
	I0116 02:05:30.847305    9248 notify.go:220] Checking for updates...
	I0116 02:05:30.847305    9248 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0116 02:05:30.848320    9248 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0116 02:05:30.849318    9248 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0116 02:05:30.850317    9248 out.go:177]   - MINIKUBE_LOCATION=17967
	I0116 02:05:30.851303    9248 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0116 02:05:30.853321    9248 config.go:182] Loaded profile config "functional-833600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0116 02:05:30.854321    9248 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.04s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)
TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
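Every entry above is an early Go testing skip: the test calls `t.Skip`/`t.Skipf` before doing any real work, which is why each shows a 0.00s duration in the `--- SKIP` line. A minimal sketch of that guard pattern, as a standalone program rather than a `_test.go` file — the `shouldSkip` helper and the hard-coded `"hyperv"`/`"windows"` values are illustrative stand-ins, not minikube's actual code:

```go
package main

import "fmt"

// shouldSkip is a hypothetical helper that reproduces the kind of guard
// the minikube integration tests run before t.Skipf: it returns a
// non-empty reason when the current driver/OS combination cannot run
// the named test. In the real suite, driver comes from a test flag and
// goos from runtime.GOOS.
func shouldSkip(testName, driver, goos string) string {
	switch testName {
	case "TestKicCustomNetwork":
		// KIC tests only make sense with a container driver.
		if driver != "docker" && driver != "podman" {
			return "only runs with docker/podman driver"
		}
	case "TestScheduledStopUnix":
		// Scheduled stop relies on unix signal handling.
		if goos == "windows" {
			return "test only runs on unix"
		}
	}
	return ""
}

func main() {
	// Simulate this report's environment: hyperv driver on Windows.
	for _, name := range []string{"TestKicCustomNetwork", "TestScheduledStopUnix"} {
		if reason := shouldSkip(name, "hyperv", "windows"); reason != "" {
			fmt.Printf("--- SKIP: %s (0.00s): %s\n", name, reason)
		}
	}
}
```

In the real suite these guards sit at the top of each test function and call `t.Skipf(...)` directly, so the test framework itself prints the `--- SKIP` line and records the zero duration.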